This is a guide on building, modifying and contributing to GrapheneOS as a developer.

Smartphone targets:

aosp_taimen (Pixel 2 XL) - legacy

aosp_walleye (Pixel 2) - legacy

aosp_crosshatch (Pixel 3 XL)

aosp_blueline (Pixel 3)

aosp_bonito (Pixel 3a XL)

aosp_sargo (Pixel 3a)

aosp_coral (Pixel 4 XL)

aosp_flame (Pixel 4)

These are all fully supported production-ready targets supporting all the baseline security features and receiving full monthly security updates covering all firmware, kernel drivers, driver libraries / services and other device-specific code. A fully signed user build for these devices is a proper GrapheneOS release. Newer generation devices have stronger hardware / firmware security and hardware-based OS security features and are better development devices for that reason. It's not possible to work on everything via past generation devices. The best development devices are the Pixel 3, Pixel 3 XL, Pixel 3a, Pixel 3a XL, Pixel 4 and Pixel 4 XL.

Generic targets:

aosp_arm

aosp_arm64

aosp_x86

aosp_x86_64

These generic targets can be used with the emulator along with many smartphones, tablets and other devices. These targets don't receive full monthly security updates, don't offer all of the baseline security features and are intended for development usage.

Providing proper support for a device or generic device family requires providing an up-to-date kernel and device support code including driver libraries, firmware and device SELinux policy extensions. Other than some special cases like the emulator, the generic targets rely on the device support code present on the device. Shipping all of this is necessary for full security updates and is tied to enabling verified boot / attestation. Pixel targets have a lot of device-specific hardening in the AOSP base along with some in GrapheneOS which needs to be ported over too. For example, various security features in the kernel including type-based Control Flow Integrity (CFI) and the shadow call stack are currently specific to the kernels for these devices.

SDK emulator targets:

sdk_phone_armv7

sdk_phone_arm64

sdk_phone_x86

sdk_phone_x86_64

These are extended versions of the generic targets with extra components for the SDK. These targets don't receive full monthly security updates, don't provide all of the baseline security features and are intended for development usage.

Board targets:

hikey - legacy

hikey960

The hikey and hikey960 targets are not actively tested and have unresolved upstream memory corruption bugs uncovered by GrapheneOS security features. They boot, but there are major issues with the graphics drivers among other problems. The intention is to support them, but the necessary time has not yet been dedicated to it. These targets don't receive full monthly security updates, don't provide all of the baseline security features and are intended for development usage.

Arch Linux, Debian buster and Ubuntu 20.04 LTS are the officially supported operating systems for building GrapheneOS.

Dependencies for fetching and verifying the sources:

repo

python3 (for repo)

git (both for repo and manual usage)

gpg (both for repo and manual usage)

89GiB+ storage for a standard sync with history, 61GiB+ storage for a lightweight sync

Baseline build dependencies:

x86_64 Linux build environment (macOS is not supported, unlike AOSP which partially supports it)

Android Open Source Project build dependencies

16GiB of memory or more. Link-Time Optimization (LTO) creates huge peaks during linking and is mandatory for Control Flow Integrity (CFI). Linking Vanadium (Chromium) and the Linux kernel with LTO + CFI are the most memory demanding tasks.

100GiB+ of additional free storage space for a typical build of the entire OS for a multiarch device

en_US.UTF-8 locale supported

You can either obtain repo as a distribution package or the self-updating standalone version from the Android Open Source Project. The self-updating variant avoids dealing with out-of-date distribution packages and depends on GPG to verify updates.

The Android Open Source Project build system is designed to provide reliable and reproducible builds. To accomplish this, it provides a prebuilt toolchain and other utilities fulfilling most of the build dependency requirements itself. These prebuilt tools have reproducible builds themselves. It runs the build process within a loose sandbox to avoid accidental dependencies on the host system. The process of moving to a fully self-contained build process with minimal external dependencies is gradual and there are still dependencies that need to be installed on the host system.

The Linux kernel build process is not integrated into the rest of the AOSP build process, but does reuse the same prebuilts to make the build reproducible.

Additional Linux kernel build dependencies not provided by the source tree:

libgcc (for the host, not the target)

binutils (for the host, not the target)

The dependency on the host libgcc and binutils for building utilities during the build process will be phased out by moving to a pure LLVM-based toolchain alongside doing it for the target. This is lagging a bit behind for the kernel, particularly code built for the host.

Additional Android Open Source Project build dependencies not provided by the source tree:

diff (diffutils)

freetype2 and any OpenType/TrueType font (such as DejaVu but anything works) for OpenJDK despite it being a headless variant without GUI support

ncurses5 (provided by the source tree for some tools but not others)

rsync

unzip

zip

Additional android-prepare-vendor (for Pixel phones) dependencies:

OpenJDK (for the jar command)

python2

python2-protobuf

The signify tool is also required for signing factory images zips. Note that it needs to be the OpenBSD signify tool, which some distributions package under a different name such as signify-openbsd to avoid a clash with an unrelated tool of the same name.

Since this is syncing the sources for the entire operating system and application layer, it will use a lot of bandwidth and storage space.

You likely want to use the most recent stable tag, not the development branch, even for developing a feature. It's easier to port between stable tags that are known to work properly than dealing with a moving target.

GrapheneOS is currently early in the process of being ported to Android 11, so the development branch is currently very barebones.

The 11 branch is the only active development branch for GrapheneOS development. Older branches are no longer maintained. It is currently used for all officially supported devices and should be used for the basis of ports to other devices. Occasionally, some devices may be supported through device support branches to avoid impacting other devices with changes needed to support them.

mkdir grapheneos-11
cd grapheneos-11
repo init -u https://github.com/GrapheneOS/platform_manifest.git -b 11
repo sync -j32

If your network is unreliable and repo sync fails, you can run the repo sync command again to continue from where it was interrupted. It handles connection failures robustly and you shouldn't start over from scratch.

Pick a specific release for a device from the releases page and download the source tree. Note that some devices use different Android Open Source Project branches so they can end up with different tags. Make sure to use the correct tag for a device. For devices without official support, use the latest tag marked as being appropriate for generic / other devices in the release notes.

mkdir grapheneos-TAG_NAME
cd grapheneos-TAG_NAME
repo init -u https://github.com/GrapheneOS/platform_manifest.git -b refs/tags/TAG_NAME

Verify the manifest:

gpg --recv-keys 65EEFE022108E2B708CBFCF7F9E712E59AF5F22A
cd .repo/manifests
git verify-tag $(git describe)
cd ../..

Complete the source tree download:

repo sync -j32

The manifest for the latest stable release refers to the revisions in other repositories via commit hashes rather than tag names. This avoids the need to use a script to verify tag signatures across all the repositories, since they simply point to the same commits with the same hashes.

Note that the repo command itself takes care of updating itself and uses gpg to verify by default.

To update the source tree, run the repo init command again to select the branch or tag and then run repo sync -j32 again. You may need to add --force-sync if a repository switched from one source to another, such as when GrapheneOS forks an additional Android Open Source Project repository. You don't need to start over to switch between different branches or tags. You may need to run repo init again to continue down the same branch since GrapheneOS only provides a stable history via tags.

The kernel needs to be built in advance, since it uses a separate build system.

Prebuilts are provided for the Pixel 4 and 4 XL, so this step is optional for those. This will be done for the other devices in the future.

List of kernels corresponding to officially supported devices:

Pixel 2, Pixel 2 XL: wahoo - separate builds due to hardening
    Pixel 2: walleye
    Pixel 2 XL: taimen

Pixel 3, Pixel 3 XL, Pixel 3a, Pixel 3a XL: crosshatch - separate builds due to hardening
    Pixel 3: blueline
    Pixel 3 XL: crosshatch
    Pixel 3a, Pixel 3a XL: bonito

Pixel 4, Pixel 4 XL: coral

As part of the hardening in GrapheneOS, it uses fully monolithic kernel builds with dynamic kernel modules disabled. This improves the effectiveness of mitigations like Control Flow Integrity benefiting from whole program analysis. It also reduces attack surface and complexity including making the build system simpler. The kernel trees marked as using a separate build above need to have the device variant passed to the GrapheneOS kernel build script to select the device.

For the Pixel 3, Pixel 3 XL, Pixel 3a, Pixel 3a XL, Pixel 4 and Pixel 4 XL the kernel repository uses submodules for building in out-of-tree modules. You need to make sure the submodule sources are updated before building. In the future, this should end up being handled automatically by repo. There's no harm in running the submodule commands for other devices as they will simply not do anything.

For example, to build the kernel for blueline:

cd kernel/google/crosshatch
git submodule sync
git submodule update --init
./build.sh blueline

The kernel/google/wahoo repository is for the Pixel 2 and Pixel 2 XL, the kernel/google/crosshatch repository is for the Pixel 3, Pixel 3 XL, Pixel 3a and Pixel 3a XL and the kernel/google/coral repository is for the Pixel 4 and Pixel 4 XL.

The build has to be done from bash as envsetup.sh is not compatible with other shells like zsh.

Set up the build environment:

source script/envsetup.sh

Select the desired build target (aosp_crosshatch is the Pixel 3 XL):

choosecombo release aosp_crosshatch user

For a development build, you may want to replace user with userdebug in order to have better debugging support. Production builds should be user builds as they are significantly more secure and don't make additional performance sacrifices to improve debugging.

Set OFFICIAL_BUILD=true to include the Updater app. You must change the URL in packages/apps/Updater/res/values/config.xml to your own domain. Using the official update server with a build signed with different keys will not work and will essentially perform a denial of service attack on our update service. If you try to use the official URL, the app will download an official update and will detect it as corrupted or tampered. It will delete the update and try to download it over and over again since it will never be signed with your key.

export OFFICIAL_BUILD=true
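The URL substitution itself can be done with sed; the following demonstrates the replacement on a sample line (the string resource name and the official URL shown are illustrative assumptions, so check config.xml for the actual entries, and updates.example.com is a placeholder for your own server):

```shell
# Illustrative only: run the same sed against
# packages/apps/Updater/res/values/config.xml in a real source tree.
echo '<string name="url">https://releases.grapheneos.org/</string>' |
    sed 's|https://releases.grapheneos.org/|https://updates.example.com/|'
```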

To reproduce a past build, you need to export BUILD_DATETIME and BUILD_NUMBER to the values set for the past build. These can be obtained from out/build_date.txt and out/build_number.txt in a build output directory and the ro.build.date.utc and ro.build.version.incremental properties which are also included in the over-the-air zip metadata rather than just the OS itself.
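For example, with placeholder values in the format of those properties (substitute the real values recorded by the build being reproduced):

```shell
# Placeholder values: take the real ones from out/build_date.txt and
# out/build_number.txt (or ro.build.date.utc / ro.build.version.incremental)
# of the build being reproduced.
export BUILD_DATETIME=1602086400
export BUILD_NUMBER=2020.10.07.15
echo "reproducing build $BUILD_NUMBER (BUILD_DATETIME=$BUILD_DATETIME)"
```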

The signing process for release builds is done after completing builds and replaces the dm-verity trees, apk signatures, etc. and can only be reproduced with access to the same private keys. If you want to compare to production builds signed with different keys you need to stick to comparing everything other than the signatures.

Additionally, set OFFICIAL_BUILD=true per the instructions above to reproduce the official builds. Note that if you do not change the URL to your own domain, you must disable the Updater app before connecting the device to the internet, or you will be performing a denial of service attack on our official update server.

This section does not apply to devices where no extra vendor files are required (HiKey, HiKey 960, emulator, generic targets).

Many of these components are already open source, but not everything is set up to be built by the Android Open Source Project build system. Switching to building these components from source will be an incremental effort. In many cases, the vendor files simply need to be ignored and AOSP will already provide them instead. Firmware cannot generally be built from source even when sources are available, other than to verify that the official builds match the sources, since it has signature verification (which is an important part of the verified boot and attestation security model).

Extract the vendor files corresponding to the matching release:

vendor/android-prepare-vendor/execute-all.sh -d DEVICE -b BUILD_ID -o vendor/android-prepare-vendor
mkdir -p vendor/google_devices
rm -rf vendor/google_devices/DEVICE
mv vendor/android-prepare-vendor/DEVICE/BUILD_ID/vendor/google_devices/* vendor/google_devices/

Note that android-prepare-vendor is non-deterministic unless a timestamp parameter is passed with --timestamp (seconds since Epoch).

Incremental builds (i.e. starting from the old build) usually work for development and are the normal way to develop changes. However, there are cases where changes are not properly picked up by the build system. For production builds, you should remove the remnants of any past builds before starting, particularly if there were non-trivial changes:

rm -r out

Next, start the build process with the m command:

m target-files-package

The -j parameter can be passed to m to set a specific number of jobs such as -j4 to use 4 jobs. By default, the build system sets the number of jobs to NumCPU() + 2 where NumCPU() is the number of available logical CPUs.
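That default can be approximated from the shell, with nproc standing in for NumCPU():

```shell
# Mirror the build system's default job count of NumCPU() + 2.
jobs=$(( $(nproc) + 2 ))
echo "m -j$jobs target-files-package"
```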

For an emulator build, always use the development build approach below.

The normal production build process involves building a target files package to be resigned with secure release keys and then converted into factory images and/or an update zip via the sections below. If you have a dedicated development device with no security requirements, you can save time by using the default build target rather than target-files-package, leaving the bootloader unlocked and flashing the raw images signed with the default public test keys.

To build the default build target:

m

Technically, you could generate test key signed update packages. However, there's no point in sideloading update packages when the bootloader is unlocked, and there's no value in a locked bootloader without signing the build using release keys, since verified boot will be meaningless and the keys used to verify sideloaded updates are also public. The only reason to use update packages or a locked bootloader without signing the build with release keys would be testing that functionality, and it makes a lot more sense to test it with proper signing keys rather than the default public test keys.

Keys need to be generated for resigning completed builds from the publicly available test keys. The keys must then be reused for subsequent builds and cannot be changed without flashing the generated factory images again which will perform a factory reset. Note that the keys are used for a lot more than simply verifying updates and verified boot.

The sample certificate subject (CN=GrapheneOS) should be replaced with your own information.

You should set a passphrase for the signing keys to keep them at rest until you need to sign a release with them. The GrapheneOS scripts (make_key and encrypt_keys.sh) encrypt the signing keys using scrypt for key derivation and AES256 as the cipher. If you use swap, make sure it's encrypted, ideally with an ephemeral key rather than a persistent key to support hibernation. Even with an ephemeral key, swap will reduce the security gained from encrypting the keys since it breaks the guarantee that they become at rest as soon as the signing process is finished. Consider disabling swap, at least during the signing process.

The encryption passphrase for all the keys generated for a device needs to match for compatibility with the GrapheneOS scripts.

To generate keys for crosshatch (you should use unique keys per device variant):

mkdir -p keys/crosshatch
cd keys/crosshatch
../../development/tools/make_key releasekey '/CN=GrapheneOS/'
../../development/tools/make_key platform '/CN=GrapheneOS/'
../../development/tools/make_key shared '/CN=GrapheneOS/'
../../development/tools/make_key media '/CN=GrapheneOS/'
../../development/tools/make_key networkstack '/CN=GrapheneOS/'
openssl genrsa 4096 | openssl pkcs8 -topk8 -scrypt -out avb.pem
../../external/avb/avbtool extract_public_key --key avb.pem --output avb_pkmd.bin
cd ../..

The avb_pkmd.bin file isn't needed for generating a signed release but rather to set the public key used by the device to enforce verified boot.

Generate a signify key for signing factory images:

signify -G -n -p keys/crosshatch/factory.pub -s keys/crosshatch/factory.sec

Remove the -n switch to set a passphrase. The signify tool doesn't provide a way to change the passphrase without generating a new key, so this is currently handled separately from encrypting the other keys and there will be a separate prompt for the passphrase. In the future, expect this to be handled by the same scripts along with the expectation of it using the same passphrase as the other keys.

You can (re-)encrypt your signing keys using the encrypt_keys script, which will prompt for the old passphrase (if any) and new passphrase:

script/encrypt_keys.sh keys/crosshatch

The script/decrypt_keys.sh script can be used to remove encryption, which is not recommended. The script exists primarily for internal usage to decrypt the keys in tmpfs to perform signing.

GrapheneOS disables updatable APEX components for the officially supported devices and targets inheriting from the mainline target, so APEX signing keys are not needed and this section can be ignored for unmodified builds.

GrapheneOS sets TARGET_FLATTEN_APEX := true to include APEX components as part of the base OS without supporting out-of-band updates.

If you don't disable updatable APEX packages, you need to generate an APK and AVB key for each APEX component and extend the GrapheneOS release.sh script to pass the appropriate parameters to replace the APK and AVB keys for each APEX component.

APEX components that are not flattened are a signed APK (used to verify updates) with an embedded filesystem image signed with an AVB key (for verified boot). Each APEX package must have a unique set of keys. GrapheneOS has no use for these out-of-band updates at this time and flattening APEX components avoids needing a bunch of extra keys and complexity.

For now, consult the upstream documentation on generating these keys. It will be covered here in the future.

Build and package up the tools needed to generate over-the-air update packages:

m otatools-package

Generate a signed release build with the release.sh script:

script/release.sh crosshatch

The factory images and update package will be in out/release-crosshatch-$BUILD_NUMBER. The update zip performs a full OS installation so it can be used to update from any previous version. More efficient incremental updates are used for official over-the-air GrapheneOS updates and can be generated by keeping around past signed target_files zips and generating incremental updates from those to the most recent signed target_files zip.

See the install guide for information on how to use the factory images. See the usage guide section on sideloading updates for information on how to use the update packages.

Running script/release.sh also generates channel metadata for the update server. If you configured the Updater client URL and set the build to include it (see the information on OFFICIAL_BUILD above), you can push signed over-the-air updates via the update system. Simply upload the update package to the update server along with the channel metadata for the release channel, and it will be pushed out to the update client. The $DEVICE-beta and $DEVICE-stable metadata provide the Beta and Stable release channels used by the update client. The $DEVICE-testing metadata provides an internal testing channel for the OS developers, which can be temporarily enabled using adb shell setprop sys.update.channel testing. The name is arbitrary and you can also use any other name for internal testing channels.

For GrapheneOS itself, the testing channel is used to push out updates to developer devices, followed by a sample future release to test that the release which is about to be pushed out to the Beta channel is able to update to a future release. Once it's tested internally, the release is pushed out to the Beta channel, and finally to the Stable channel after public testing. A similar approach is recommended for derivatives of GrapheneOS.

Incremental updates shipping only the changes between two versions can be generated as a much more efficient way of shipping updates than a full update package containing the entire operating system. The GrapheneOS Updater app will automatically use a delta update if one exists for going directly from the currently installed version to the latest release. In order to generate a delta update, the original signed target files package for both the source version and target version are needed. The script/generate_delta.sh script provides a wrapper script for generating delta updates by passing the device, source version build number and target version build number. For example:

script/generate_delta.sh crosshatch 2019.09.25.00 2019.10.07.21

The script assumes that the releases are organized in the following directory structure:

releases
├── 2019.09.25.00
│   └── release-crosshatch-2019.09.25.00
│       ├── crosshatch-factory-2019.09.25.00.zip
│       ├── crosshatch-factory-2019.09.25.00.zip.sig
│       ├── crosshatch-img-2019.09.25.00.zip
│       ├── crosshatch-ota_update-2019.09.25.00.zip
│       ├── crosshatch-target_files-2019.09.25.00.zip
│       └── crosshatch-testing
└── 2019.10.07.21
    └── release-crosshatch-2019.10.07.21
        ├── crosshatch-factory-2019.10.07.21.zip
        ├── crosshatch-factory-2019.10.07.21.zip.sig
        ├── crosshatch-img-2019.10.07.21.zip
        ├── crosshatch-ota_update-2019.10.07.21.zip
        ├── crosshatch-target_files-2019.10.07.21.zip
        └── crosshatch-testing
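As a sketch, that structure can be recreated with placeholder files (a real tree contains the actual artifacts produced by script/release.sh; the signed target_files zips are the inputs generate_delta.sh needs):

```shell
# Recreate the expected directory structure with empty placeholder files
# for the two example builds (real trees hold the actual release artifacts).
for build in 2019.09.25.00 2019.10.07.21; do
    dir=releases/$build/release-crosshatch-$build
    mkdir -p "$dir"
    touch "$dir/crosshatch-target_files-$build.zip"
done
find releases -type f
```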

Incremental updates are uploaded alongside the update packages and update metadata on the static web server used as an update server. The update client will automatically check for an incremental update and use it if available. No additional metadata is needed to make incremental updates work.

Like the Android Open Source Project, GrapheneOS contains some code that's built separately and then bundled into the source tree as binaries. This section will be gradually expanded to cover building all of it.

Vanadium is a hardened fork of Chromium developed by GrapheneOS and used to provide the WebView and optionally the standalone browser app. It tracks the Chromium release cycles along with having additional updates for downstream changes to the privacy and security hardening patches, so it's updated at a different schedule than the monthly Android releases.

The browser and the WebView are independent applications built from the Chromium source tree. The GrapheneOS browser build is located at external/vanadium and the WebView is at external/chromium-webview.

See Chromium's Android build instructions for details on obtaining the prerequisites.

git clone https://github.com/GrapheneOS/Vanadium.git
cd Vanadium
git checkout $CORRECT_BRANCH_OR_TAG

Fetch the Chromium sources:

fetch --nohooks android

Sync to the latest stable release for Android (replace $VERSION with the correct value):

gclient sync -D --with_branch_heads -r $VERSION --jobs 32

Apply the GrapheneOS patches on top of the tagged release:

cd src
git am --whitespace=nowarn ../patches/*.patch

Then, configure the build in the src directory:

gn args out/Default

Copy the GrapheneOS configuration from ../args.gn and save/exit the editor. Modify target_cpu as needed if the target is not arm64. For x86_64, the correct value for target_cpu is x64, but note that the Android source tree refers to it as x86_64.
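As a non-interactive sketch, a minimal configuration for an x86_64 target could be written like this (the two arguments shown are generic GN settings, not the full configuration; the actual GrapheneOS configuration should be copied from ../args.gn):

```shell
# Minimal illustration only: copy the real configuration from ../args.gn.
mkdir -p out/Default
cat > out/Default/args.gn <<'EOF'
target_os = "android"
target_cpu = "x64"
EOF
cat out/Default/args.gn
```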

You need to set trichrome_certdigest to the correct value for your generated signing key. You can obtain this with the following command:

keytool -export-cert -alias vanadium -keystore vanadium.keystore | sha256sum
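sha256sum prints the digest followed by a marker for standard input, and only the first field is the value to paste into trichrome_certdigest. A sketch with placeholder input:

```shell
# Placeholder input: in practice, pipe in the keytool output shown above.
digest=$(printf 'placeholder-certificate-bytes' | sha256sum | awk '{print $1}')
echo "trichrome_certdigest = \"$digest\""
```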

Build the components:

ninja -C out/Default/ trichrome_webview_64_32_apk trichrome_chrome_64_32_bundle trichrome_library_64_32_apk

Generate a signing key for Vanadium if this is the initial build:

cd ..
keytool -genkey -v -keystore vanadium.keystore -storetype pkcs12 -alias vanadium -keyalg RSA -keysize 4096 -sigalg SHA512withRSA -validity 10000 -dname "cn=GrapheneOS"
cd src

You will be prompted to enter a password which will be requested by the script for signing releases. You should back up the generated keystore with your other keys.

Generate TrichromeChrome.apk from the bundle and sign the apks:

../generate_release.sh

The apks need to be copied from out/Default/apks/release/*.apk into the Android source tree at external/vanadium/prebuilt/arm64/ with arm64 substituted with the correct value for other architectures (arm, x86, x86_64).

WebView provider apps need to be whitelisted in frameworks/base/core/res/res/xml/config_webview_packages. By default, only the Vanadium WebView is whitelisted.

The official releases of the Auditor and PdfViewer apps are bundled as an apk into external/ repositories. There are no modifications to these for GrapheneOS. These are built and signed with the standard gradle Android plugin build system.

A build of Seedvault is bundled as an apk into an external/ repository. There are no modifications made to it.

Manifests for stable releases are generated with repo manifest -r after tagging the release across all the repositories in a temporary branch and syncing to it. This provides a manifest referencing the commits by hashes instead of just tags to lock in the revisions. This makes verification of the releases simpler, since only the manifest tag needs to be verified rather than tags for each repository. This also means the whole release can be verified using the GrapheneOS signing key despite referencing many upstream repositories that are not forked by the GrapheneOS project.

It can be useful to set up a standalone installation of the SDK separate from the Android Open Source Project tree. This is how the prebuilt apps are built, rather than using the older branch of the SDK in the OS source tree.

Android Studio can also be set up to use an existing SDK. If Android Studio is installed with an SDK installation already available and set up in the environment, it will recognize it and use it automatically. You'll also likely want a working command-line SDK environment even if you heavily use Android Studio.

Using the official releases of the SDK is recommended for simplicity, although with a lot of effort you can build everything yourself. Distribution packages are generally quite out-of-date and should be avoided. To set up a minimal SDK installation at ~/android/sdk without Android Studio:

mkdir -p ~/android/sdk/cmdline-tools
cd ~/android/sdk/cmdline-tools
curl -O https://dl.google.com/android/repository/commandlinetools-linux-6514223_latest.zip
echo 'ef319a5afdb41822cb1c88d93bc7c23b0af4fc670abca89ff0346ee6688da797 commandlinetools-linux-6514223_latest.zip' | sha256sum -c
unzip commandlinetools-linux-6514223_latest.zip
rm commandlinetools-linux-6514223_latest.zip
mv tools latest

Set ANDROID_HOME to point at the SDK installation in your current shell and shell profile configuration. You also need to add the cmdline-tools binaries to your PATH. For example:

export ANDROID_HOME="$HOME/android/sdk"
export PATH="$HOME/android/sdk/cmdline-tools/latest/bin:$PATH"

Make cmdline-tools responsible for updating itself:

sdkmanager 'cmdline-tools;latest'

Install platform-tools for tools like adb and fastboot:

sdkmanager platform-tools

Add the platform-tools executables to your PATH:

export PATH="$HOME/android/sdk/platform-tools:$PATH"

For running the Compatibility Test Suite you'll also need the build-tools for aapt:

sdkmanager 'build-tools;30.0.2'

Add the build-tools executables to your PATH:

export PATH="$HOME/android/sdk/build-tools/30.0.2:$PATH"

For working with native code, you need the NDK:

sdkmanager ndk-bundle

Add the ndk-bundle executables to your PATH:

export PATH="$HOME/android/sdk/ndk-bundle:$PATH"

You should update the SDK before use from this point onwards:

sdkmanager --update

You can install Android Studio alongside the standalone SDK and it will detect it via the ANDROID_HOME environment variable rather than installing another copy of it. For example:

cd ~/android
curl -O https://dl.google.com/dl/android/studio/ide-zips/4.0.1.0/android-studio-ide-193.6626763-linux.tar.gz
echo 'f2f82744e735eae43fa018a77254c398a3bab5371f09973a37483014b73b7597 android-studio-ide-193.6626763-linux.tar.gz' | sha256sum -c
tar xvf android-studio-ide-193.6626763-linux.tar.gz
rm android-studio-ide-193.6626763-linux.tar.gz
mv android-studio studio

Add the Android Studio executables to your PATH :

export PATH="$HOME/android/studio/bin:$PATH"

You can start it with studio.sh .

This section will be expanded to cover various test suites and testing procedures rather than only the current very minimal coverage of the Compatibility Test Suite (CTS).

To test a build for the emulator, run emulator within the build environment. The emulator will use CPU hardware acceleration via KVM along with optional graphics acceleration via the host GPU if these are available.

Testing with the Compatibility Test Suite (CTS) can be done by either building the test suite from source or using the official releases.

Official releases of the CTS can be downloaded from the Compatibility Suite Downloads page. You should download the CTS for the relevant release (Android 11) and architecture (ARM). There's a separate zip for the main CTS, the manual portion (CTS Verifier) and the CTS for Instant Apps. The latest release of the CTS Media Files also needs to be downloaded from that section.

mkdir -p ~/android/cts/{arm,x86}
cd ~/android/cts/arm
curl -O https://dl.google.com/dl/android/cts/android-cts-11_r1-linux_x86-arm.zip
unzip android-cts-11_r1-linux_x86-arm.zip
rm android-cts-11_r1-linux_x86-arm.zip
curl -O https://dl.google.com/dl/android/cts/android-cts-verifier-11_r1-linux_x86-arm.zip
unzip android-cts-verifier-11_r1-linux_x86-arm.zip
rm android-cts-verifier-11_r1-linux_x86-arm.zip
cd ~/android/cts/x86
curl -O https://dl.google.com/dl/android/cts/android-cts-verifier-11_r1-linux_x86-x86.zip
unzip android-cts-verifier-11_r1-linux_x86-x86.zip
rm android-cts-verifier-11_r1-linux_x86-x86.zip
curl -O https://dl.google.com/dl/android/cts/android-cts-11_r1-linux_x86-x86.zip
unzip android-cts-11_r1-linux_x86-x86.zip
rm android-cts-11_r1-linux_x86-x86.zip
cd ~/android/cts
curl -O https://dl.google.com/dl/android/cts/android-cts-media-1.5.zip
unzip android-cts-media-1.5.zip
rm android-cts-media-1.5.zip

You'll need a device attached to your computer with ADB enabled along with the Android SDK installed. The build-tools and platform-tools packages need to be installed and the binaries need to be added to your PATH. See the standalone SDK installation instructions above.

Copy media onto the device:

cd android-cts-media-1.5
./copy_images.sh
./copy_media.sh

You also need to do some basic setup for the device. It's possible for changes from a baseline install to cause interference, so it can be a good idea to factory reset the device if assorted changes have been made. The device needs to be running a user build for the security model to be fully intact in order to pass all the security tests. A userdebug build is expected to fail some of the tests. GrapheneOS also makes various changes intentionally deviating from the requirements expected by the CTS, so there will always be some expected failures. A few of the tests are also known to be quite flaky or broken even with the stock OS and/or AOSP. These will be documented here at some point.

Must be connected to a WiFi network with IPv6 internet access

Must have a working SIM card with mobile data with IPv6 internet access

Disable SIM lock

Enable Bluetooth

Enable NFC and NDEF (Android Beam)

Open / close Chromium to deal with initial setup

Prop up the device with a good object to focus on and good lighting for the camera tests. Both the front and rear cameras will be used, so make sure this is true for both.

Bluetooth beacons for Bluetooth tests

Must have a great GPS/GNSS signal for location tests

SIM card with carrier privilege rules

Secure element applet installed on the embedded secure element or SIM card

At least one Wi-Fi RTT access point powered up but not connected to any network

The screen lock must be disabled.
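Before starting a run, it also helps to confirm the device is actually on a user build, since userdebug builds fail some of the security tests. The helper below is a sketch wrapped around the standard getprop check; only adb shell getprop ro.build.type itself is a real command here:

```shell
# Sketch: classify the output of `adb shell getprop ro.build.type`.
check_build_type() {
    # $1: the build type reported by the device
    if [ "$1" = "user" ]; then
        echo "user build: security model fully intact for CTS"
    else
        echo "warning: $1 build is expected to fail some security tests"
    fi
}

# With a device attached, you would use:
#   check_build_type "$(adb shell getprop ro.build.type)"
check_build_type user
```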

Run the test harness:

./android-cts/tools/cts-tradefed

Note that _JAVA_OPTIONS being set will break the version detection.

To obtain a list of CTS modules:

list modules

To run a specific module and avoid wasting time capturing device information:

run cts --skip-device-info --module CtsModuleName

To speed up initialization after running some initial tests:

run cts --skip-device-info --skip-preconditions --module CtsModuleName

It's possible to run the whole standard CTS plan with a single command, but running specific modules is recommended, especially if you don't have everything set up for the entire test suite.
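For repeated runs of different modules, a tiny wrapper can save retyping the options. This sketch only composes the tradefed command line shown above; the function name and the module name are illustrative:

```shell
# Sketch: build the cts-tradefed command line for a fast single-module run,
# skipping device info capture and precondition checks.
cts_module_cmd() {
    # $1: CTS module name, e.g. CtsBluetoothTestCases
    echo "run cts --skip-device-info --skip-preconditions --module $1"
}

cts_module_cmd CtsBluetoothTestCases
```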

The Android Open Source Project has branches and/or tags for the releases of many different components. There are tags and/or branches for the OS, device kernels, mainline components (APEX), the NDK, Android Studio, the platform-tools distribution packages, the CTS, androidx components, etc. You should obtain the sources via manifests using the repo tool, either using the manifest for a tag / branch in platform/manifest.git or a manifest provided elsewhere. Different projects use different subsets of the repositories. Many of the repositories only exist as an archive for older releases and aren't referenced in current manifests.

Some components don't have the infrastructure set up to generate and push their own branches and tags to AOSP. In other cases, it's simply not obvious to an outsider which one should be used. As long as the component is built on the standard Android project CI infrastructure, it's possible to obtain the manifests to build it based on the build number, which is generally incorporated into the build. For example, even without a platform-tools tag, you can obtain the build number from adb version or fastboot version. Their version output uses the format $VERSION-$BUILD_NUMBER, such as 30.0.3-6597393 for version 30.0.3 where the official release had the build number 6597393. You can obtain the manifest properties with the appropriate repository revisions from ci.android.com with a URL like this: https://ci.android.com/builds/submitted/6597393/sdk/latest/view/repo.prop
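The version-to-URL mapping can be sketched in shell; the URL pattern is the one given above and the version string is the 30.0.3 example (in practice you would extract it from the adb version or fastboot version output):

```shell
# Sketch: extract the build number from the $VERSION-$BUILD_NUMBER format and
# form the ci.android.com URL for the corresponding repo.prop manifest
# properties.
version='30.0.3-6597393'
build_number="${version##*-}"
echo "https://ci.android.com/builds/submitted/${build_number}/sdk/latest/view/repo.prop"
```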

The platform-tools tags exist because the GrapheneOS project requested them. The same could be done for other projects, but it's not strictly necessary as long as it's possible to obtain the build number to request the information from the Android project CI server.

As another kind of example, prebuilts/clang, prebuilts/build-tools, etc. have a manifest file committed alongside the prebuilts. Other AOSP toolchain prebuilts reference a build number.

The following programming languages are acceptable for completely new GrapheneOS projects:

Kotlin for apps and any services closely tied to the apps, now that it's not only officially supported by the Android SDK and Android Studio but is also the default language, with Kotlin-exclusive enhancements to the APIs

Web applications must be entirely static HTML/CSS/JavaScript. TypeScript would make sense at a larger scale but there are no plans for any large web applications.

Rust with no_std for low-level code used in a hypervisor, kernel, daemon, system library, etc. Keep in mind that low-level code is to be avoided whenever a higher-level language is better suited to the job. In general, the project aims to avoid creating more low-level code manually dealing with memory ownership and lifetimes in the first place.

C in rare cases for very small and particularly low-level projects without opportunities to reduce the trusted computing base for memory corruption to any significant degree with Rust, such as for the hardened_malloc project

arm64 assembly in extremely rare cases where C or Rust aren't usable with compiler intrinsics

Python 3 for small (less than 500 lines) development-related scripts that are not exposed to untrusted input. It's never acceptable to use it for client-side code on devices or for servers. It isn't used on the servers even for non-application-server code.

Bash for tiny (less than 200 lines) build scripts without any non-trivial logic where Python would be an annoyance.

Much of the work is done on existing projects, and the existing languages should be used unless there are already clear stable API boundaries where a different language could be used without causing a substantial maintenance burden. The following languages are typical from most to least common: Java, C++, C, JavaScript, arm64 assembly, POSIX shell, Bash.

For existing projects, use the official upstream code style. Avoid using legacy conventions that they're moving away from themselves. Follow the code style they use for new additions. Some projects have different code styles for different directories or files depending on their sources, in which case respect the per-file style.

For new projects, follow the official code style for the language. Treat the standard library APIs as defining the naming style for usage of the language, i.e. C uses variable_or_function_name, type_name and MACRO_NAME while JavaScript uses variable_or_function_name, ClassName and CONSTANT_NAME. For Python, follow PEP 8, and the same goes for other languages with official styles, whether defined in a document or by the default mode of the official formatting tool such as rustfmt.

For cases where there isn't an official or prevailing code style, avoid tabs, use 4-space indents, and use function_name, variable_name, TypeName and CONSTANT_NAME. Prefer single-line comment syntax except in rare cases where it makes sense to add a tiny comment within a line of code. In languages with the optional braces misfeature (C, C++, Java), always use braces. Open braces on the same line as function definitions / statements. Wrap lines at 100 columns except in rare cases where wrapping would be far uglier.

For JavaScript, all code should be contained within ES6 modules, which means every script element should use type="module". Modules provide proper namespacing with explicit imports and exports. Modules automatically use strict mode, so "use strict"; is no longer needed. By default, modules are also deferred until after the DOM is ready, i.e. they have an implicit defer attribute. Rely on this rather than unnecessarily listening for an event to determine whether the DOM is ready for use. It can make sense to use async to run the code earlier if the JavaScript is essential to the content and benefits from being able to start tasks before the DOM is ready, such as retrieving important content or checking whether there's a login session. Always end lines with semicolons, since automatic semicolon insertion is poorly designed. Declare variables with const, or with let when they're reassigned, but never use var as it is effectively broken. Prefer for..of loops. JavaScript must pass verification with jshint using the following .jshintrc configuration:

{
    "browser": true,
    "module": true,
    "curly": true,
    "devel": true,
    "esversion": 6,
    "freeze": true,
    "futurehostile": true,
    "strict": "global",
    "varstmt": true
}

Cookies are only used for login sessions. The only other use case considered valid would be optimizing HTTP/2 Server Push, but the intention is only to use that for render-blocking CSS, and it's not really worth optimizing for caching when the CSS is tiny in practice. Every cookie must have the __Host- prefix to guarantee that it has the Secure attribute and Path=/. The HttpOnly and SameSite=Strict flags should also always be included. These kinds of cookies can provide secure login sessions in browsers with fully working SameSite=Strict support. However, CSRF tokens should still be used for the near future in case there are browser issues.
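Concretely, a session cookie satisfying all of these requirements would be set with a header along these lines (the cookie name and placeholder value are illustrative):

```http
Set-Cookie: __Host-session=<opaque random token>; Secure; Path=/; HttpOnly; SameSite=Strict
```

Note that the __Host- prefix also forbids a Domain attribute, which keeps the cookie bound to the exact host that set it.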

For web content, use dashes as user-facing word separators rather than underscores. Page titles should follow the scheme "Page | Directory | Higher-level directory | Site" for usability with a traditional title as the Open Graph title.

HTML must pass verification with the Nu HTML Checker (validator.nu) and xmllint. Ensuring that it parses as XML with xmllint catches many common mistakes and typos that are missed by HTML validation due to the ridiculously permissive nature of HTML: it enforces closing every tag, proper escaping and so on. XHTML does not really exist anymore; XML parsing is simply used as an enforced coding standard and lint pass. It also keeps the pages compatible with XML-based tooling.

Avoid designing around class inheritance unless it's a rare case where it's an extremely good fit or the language sucks (Java) and it's the least bad approach, but still try to avoid it.

Use concise but self-explanatory variable names. Prefer communicating information via naming rather than comments whenever possible. Don't name variables i, j, k, etc. like C programmers. It's okay to use names like x and y for parameters if the function is genuinely that generic and operates on arbitrary values. In general, scope variables as narrowly as possible (in C or C++, be careful about this when references are taken).

Write code that's clean and self-explanatory. Use comments to explain or justify non-obvious things, but try to avoid needing them in the first place. In most cases, they should just be communicating non-local information such as explaining why an invariant is true based on the code elsewhere (consider a runtime check to make sure it's true, or an assertion if performance would be an issue). Docstrings at the top of top-level functions, modules, etc. are a different story and shouldn't be avoided.

Make extensive usage of well designed standard library modules. For apps, treat Jetpack (androidx) as part of the standard library and make good use of it. For Java, Guava can also be treated as part of the standard library.

Libraries outside of the standard library should be used very cautiously. They should be well maintained, stable, well tested and widely used. Libraries implemented with memory unsafe languages should generally be avoided (one exception: SQLite).

Generally, frameworks and libraries existing solely to provide different paradigms and coding patterns are to be avoided. They raise the barrier to entry for developers, generally only add complexity unless used at very large scales (and may not even make things simpler in those cases), and come and go as fads. Such patterns are only acceptable when they're part of the standard libraries or the libraries GrapheneOS treats as standard (androidx, Guava), and even then they should be approached cautiously: only use one if it truly makes the correct approach simpler. Ignore fads and figure out whether something actually makes sense to use; otherwise just stick to the old-fashioned way if the fancy alternatives aren't genuinely better.