Mobile phones are used widely across the world, and as a result security threats are equally widespread. A mobile attack is possible at any of the following layers:

- The device
- The network
- The data center

This document covers advanced Android app security and gives developers specific recommendations to follow during development.

Code Practices

Obfuscate: Reverse engineering an app can provide valuable insight into how it works. Making your app more complex internally makes it more difficult for attackers to see how the app operates, which reduces the number of attack vectors. Obfuscate the code to make it more difficult for a malicious user to examine the inner workings of the app.
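For example, with the standard Android Gradle setup, R8/ProGuard shrinking and obfuscation can be enabled per build type in build.gradle (a sketch using the default file names):

```groovy
android {
    buildTypes {
        release {
            // Enables code shrinking and obfuscation via R8/ProGuard
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

Project-specific keep rules go in proguard-rules.pro, e.g. for classes accessed via reflection.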

Implement anti-debug techniques: An attacker might find a way around debugging restrictions in order to attack the application at a low level. Preventing a debugger from attaching limits an attacker's ability to interfere with the low-level runtime.

By default, the flag BuildConfig.DEBUG is set to true when the app is run from the IDE and to false in the released APK. To set the debuggable flag explicitly, edit build.gradle for the app module:

release {
    debuggable true
}

debug {
    debuggable false
}

If you set debuggable to true explicitly, don't forget to remove this change (or set it back to false) before the release build is published, and check that a release build does not unnecessarily produce log output or display sensitive data, which could cause a serious issue.

Test Third-Party Libraries: Developers rely heavily on third-party libraries. It is important to examine and test these libraries as thoroughly as you test your own code, because they can contain vulnerabilities and weaknesses. This should include core Android libraries too.

Securely Store Sensitive Data: Android keeps an application's memory allocated (even after use) until it is reclaimed, so encryption keys may remain in memory. An attacker who finds or steals the device can attach a debugger and dump the memory of the application.

Possible solutions

1. Do not keep sensitive data (e.g., encryption keys) in memory longer than required.

2. Nullify any variables that hold keys after use.

3. Avoid storing sensitive keys or passwords in immutable objects such as java.lang.String; use a char array instead. Even if references to immutable objects are removed or nulled, they may remain in memory until garbage collection occurs (which cannot be forced by the app).

4. If storing sensitive data on the device is an application requirement, you should add an additional layer of verified, third-party encryption to the data, as device encryption alone is not sufficient. By adding another layer of encryption, you have more control over the implementation and are less exposed to attacks focused on the main OS encryption classes. Some options include:

Encrypting sensitive values in an SQLite database using SQLCipher, which encrypts the entire database using a PRAGMA key

The PRAGMA key can be generated at runtime when the user initially installs the app or launches it for the first time

Generate a unique PRAGMA key for each user and device

The source for key generation should have sufficient entropy (i.e., avoid generating key material from easily predictable data such as username)
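Points 1–3 above, plus the entropy requirement, can be sketched in plain Java (a minimal illustration; the 32-byte key size, hex encoding, and class name are assumptions, and on Android you would typically pair this with SQLCipher or the Android Keystore):

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class KeyMaterial {

    // Generate key material from a cryptographically strong entropy source.
    // 32 bytes (256 bits) is an assumed size; match it to your cipher's needs.
    public static byte[] generateKey() {
        byte[] key = new byte[32];
        new SecureRandom().nextBytes(key);
        return key;
    }

    // Hex-encode the key, e.g. to pass as a SQLCipher PRAGMA key string.
    public static String toHex(byte[] key) {
        StringBuilder sb = new StringBuilder(key.length * 2);
        for (byte b : key) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Zero out key material as soon as it is no longer needed (point 2 above).
    public static void wipe(byte[] key) {
        Arrays.fill(key, (byte) 0);
    }

    public static void main(String[] args) {
        byte[] key = generateKey();
        String pragmaKey = toHex(key); // use it, then discard
        System.out.println("hex key length: " + pragmaKey.length());
        wipe(key); // key bytes are now all zero
    }
}
```

Note the key is derived from SecureRandom rather than predictable data such as a username, and the backing array is wiped immediately after use.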

Secure Deletion Of Data: On Android, calling file.delete() will not securely erase the target file. Traditional approaches to wipe a file generally do not work on mobile devices due to the aggressive management of the NAND Flash memory. Operate under the assumption that any data written to a device can be recovered.

Possible solutions

Avoid storing sensitive data on the device. If you must store it, encrypt the sensitive data held in files. Rewriting the contents of the file and syncing to storage before deleting can also help.
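That overwrite-sync-delete sequence can be sketched in plain Java (class and method names are assumptions; as the section above warns, flash wear-leveling means this is best-effort mitigation, not a guarantee):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.security.SecureRandom;

public class SecureDelete {

    // Best-effort wipe: overwrite the file's contents with random bytes,
    // force the write to storage, then delete. Assume the data may still
    // be recoverable from the underlying NAND flash.
    public static void wipeAndDelete(Path file) throws IOException {
        byte[] junk = new byte[(int) Files.size(file)];
        new SecureRandom().nextBytes(junk);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(junk));
            ch.force(true); // sync contents and metadata to the device
        }
        Files.delete(file);
    }
}
```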

Avoid Query String for Sensitive Data: Query string parameters are more visible and can often be unexpectedly cached (web history, web server or proxy logs).

Possible solutions

Using an unencrypted query string for meaningful data should be avoided. Whether POST or GET, temporary session cookies should be used. Query string data can be encrypted using a temporary session key negotiated between hosts using secure algorithms.

Implement Anti-tamper Techniques: Attackers can tamper with or install a backdoor on an app, re-sign it and publish the malicious version to third-party app marketplaces.

Possible solutions

Use checksums, digital signatures, and other validation mechanisms to help detect file tampering. When an attacker attempts to manipulate the application, the correct checksum is not preserved, which can be used to detect and prevent illegitimate execution. Note that such techniques are not foolproof and can be bypassed by a sufficiently motivated attacker; checksums, digital signatures, and other validation techniques increase the amount of time and effort an attacker must spend to successfully breach the application. An application can silently wipe its user data, keys, or other important data whenever tampering is detected to further challenge an attacker. Applications that have detected tampering can also notify an administrator.
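A minimal sketch of checksum validation in plain Java (the class name and the idea of an expected value recorded at build time are assumptions; a real scheme must also protect the expected value itself from tampering, e.g. with a signature):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IntegrityCheck {

    // Compute a hex-encoded SHA-256 checksum of the given bytes.
    public static String sha256Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is required by the Java spec", e);
        }
    }

    // Compare an artifact's checksum against the expected value.
    public static boolean isUntampered(byte[] artifact, String expectedHex) {
        return MessageDigest.isEqual( // constant-time comparison
                sha256Hex(artifact).getBytes(StandardCharsets.US_ASCII),
                expectedHex.getBytes(StandardCharsets.US_ASCII));
    }
}
```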

Use SECURE Setting For Cookies: If a cookie is not marked as "Secure," it may be transmitted over an insecure connection whether or not the session with the host is secure. In other words, it may be transmitted over an HTTP connection.

Possible solutions

The Set-Cookie headers should use the "Secure" and "HttpOnly" settings. These settings should be applied to all cookies for native and/or web apps. In Android, the HttpCookie class provides methods to set these flags (see HttpCookie):

public void setSecure(boolean flag)
If true, the cookie can only be sent over a secure protocol such as HTTPS; if false, it can be sent over any protocol.

public void setHttpOnly(boolean httpOnly)
If true, the cookie is considered HTTP-only, i.e., visible only as part of an HTTP request and not accessible to scripting engines such as JavaScript.
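For example (a small sketch using java.net.HttpCookie; the cookie name and value are placeholders):

```java
import java.net.HttpCookie;

public class SecureCookies {

    // Build a cookie with both the Secure and HttpOnly flags set.
    public static HttpCookie secureCookie(String name, String value) {
        HttpCookie cookie = new HttpCookie(name, value);
        cookie.setSecure(true);   // only sent over secure protocols such as HTTPS
        cookie.setHttpOnly(true); // not visible to scripting engines such as JavaScript
        return cookie;
    }
}
```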

Fully Validate SSL/TLS: An application not properly validating its connection to the server is susceptible to a man-in-the-middle attack by a privileged network attacker. This means that an attacker would be able to capture, view, and modify traffic sent and received between the application and the server.

Common mistakes made by developer

Developers may disable certificate validation in apps for a variety of reasons. One example is when a developer needs to test code on the production server, but does not have a domain certificate for the test environment. In this situation, the developer may add code to the networking library to accept all certificates as valid. Accepting all certificates as valid, however, allows an attacker to execute an MITM attack on the app by simply using a self-signed certificate. Another common developer mistake in the implementation of SSL/TLS is setting a permissive hostname verifier. If an app “allows all hostnames” a certificate issued by any valid certificate authority (CA) for any domain name can be used to execute an MITM attack and sign traffic.

Possible solutions

For any app that handles highly sensitive data, use certificate pinning to protect against MITM attacks. The majority of apps have defined locations to which they connect (their backend servers) and inherently trust the infrastructure to which they connect, therefore it’s acceptable (and often more secure) to use a “private” public-key infrastructure, separate from public certificate authorities. With this approach, an attacker needs the private keys from the server side to perform a MITM attack against a device for which they do not have physical access. If certificate pinning cannot be implemented for any app functionality that handles highly sensitive data, implement proper certificate validation, which consists of two parts:

Certificate validation: Certificates presented to the app must be fully validated by the app and be signed by a trusted root CA.

Hostname validation: The app must check and verify that the hostname (Common Name, or CN) extracted from the certificate matches that of the host with which the app intends to communicate.

From Android 7 (API level 24) developers can maintain trust with a custom CA throughout the entire app or specific domains using the Network Security Configuration file. Developers can also use the Network Security Configuration file for certificate pinning (see Android documentation about the Network Security Configuration feature).
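A sketch of such a Network Security Configuration with certificate pinning (the domain and pin values are placeholders; real pins are base64-encoded SHA-256 hashes of your servers' Subject Public Key Info):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2027-01-01">
            <pin digest="SHA-256">7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=</pin>
            <!-- always include a backup pin in case the primary key is rotated -->
            <pin digest="SHA-256">fwza0LRMXouZHRC8Ei+4PyuldPDcf3UKgO/04cDM1oE=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

The file is referenced from the manifest via android:networkSecurityConfig="@xml/network_security_config" on the application element.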

Protect Against SSL Downgrade Attacks: When your app communicates with servers using cleartext network traffic, such as HTTP, the traffic risks being eavesdropped upon and tampered with by third parties. This may leak information about your users and open your app up to injection of unauthorized content or exploits. Ideally, your app should use secure traffic only, such as by using HTTPS instead of HTTP.

An attacker can bypass SSL/TLS by transparently hijacking HTTP traffic on a network, monitoring for HTTPS requests, and then eliminating SSL/TLS, which creates an unsecured connection between the client and server. This attack can be particularly difficult to prevent on mobile web apps.

Possible solutions

A mitigation recently put in place within Android is to treat non-TLS/cleartext traffic as a developer error. Android 6.0 Marshmallow (API level 23) introduced a feature that allows app developers to address cleartext traffic risks.

To protect your app against cleartext traffic risks, declare the android:usesCleartextTraffic="false" attribute on the application element in your app's AndroidManifest.xml. This declares that the app is not supposed to use cleartext network traffic and blocks cleartext traffic in the app. For example, if your app accidentally attempts to sign in the user via a cleartext HTTP request, the request will be blocked and the user's identity and password will not leak to the network. (Note: if not declared, the default value is true for apps targeting API level 27 or lower; such apps behave as before, with no restrictions on using HTTP.)

From Android 9 (API level 28), the isCleartextTrafficPermitted() method returns false by default. If your app needs to enable cleartext for specific domains, you must explicitly set cleartextTrafficPermitted to true for those domains in your app's Network Security Configuration.
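A sketch of such a configuration (the legacy domain is a placeholder): cleartext is blocked everywhere except one host that has no TLS.

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Block cleartext traffic everywhere by default -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- Allow it only for a specific legacy host -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```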

Establish Local Session Timeout: Mobile devices are frequently lost or stolen, and an attacker can take advantage of an active session to access sensitive data, execute transactions, or perform reconnaissance on a device owner’s accounts. In addition, without a proper session timeout, an app may be susceptible to data interception via a man-in-the-middle attack.

Possible solutions

Any time the app is not used for more than 5 minutes, terminate the active session, redirect the user to the log-in screen, ensure that no app data is visible, and require the user to re-enter log-in credentials to access the app. After timeout, also discard and clear all memory associated with user data, including any master keys used to decrypt that data. Also make sure the session timeout occurs on both the client side and the server side, to mitigate against an attacker modifying the local timeout mechanism.
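The client-side half of this can be sketched in plain Java (class name and wiring are assumptions; on Android you would typically call touch() from Activity.onUserInteraction() and check isExpired() in onResume()):

```java
public class SessionTimer {

    private final long timeoutMillis;
    private long lastActivityMillis;

    public SessionTimer(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
        this.lastActivityMillis = System.currentTimeMillis();
    }

    // Call on every user interaction to reset the inactivity clock.
    public void touch() {
        lastActivityMillis = System.currentTimeMillis();
    }

    // When this returns true: terminate the session, clear user data and
    // master keys from memory, and redirect to the log-in screen.
    public boolean isExpired() {
        return System.currentTimeMillis() - lastActivityMillis > timeoutMillis;
    }
}
```

Remember this only covers the client side; the server must enforce its own timeout independently.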

Validate content provider: Content providers allow other applications on a device to request and share data. If sensitive information is accidentally leaked in one of these content providers, all an attacker needs to do is to call the content provider and the sensitive data will be exposed to the attacker by the application.

Possible solutions

Specify the android:readPermission and android:writePermission attributes, which restrict who can read from or write to the provider. Another solution is to remove the exported flag or set it to false if you don't want to share the stored records with third-party apps. [Note: The default exported value for content providers is true for apps targeting API level 16 or lower. Because the primary purpose of a content provider is to share information between apps, it was assumed that these should be public and accessible by other apps.]

<provider
    android:name="NotesProvider"
    android:authorities="com.example.MyApplication.NotesProvider"
    android:exported=["true" | "false"]
    android:readPermission="string"
    android:writePermission="string" />

Avoid Storing App Data in Backups: Performing a backup of the data on an android device can potentially also back-up sensitive information stored within an app’s private directory.

Possible solutions

By default, the allowBackup flag within an Android app's Manifest file is set to true. This results in an Android backup file (backup.ab) including all subdirectories and files contained within an app's private directory on the device's file system. Therefore, explicitly declare the allowBackup flag as false.
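In the manifest, that looks like the following (a minimal fragment; other application attributes omitted):

```xml
<application
    android:allowBackup="false"
    android:label="@string/app_name">
    ...
</application>
```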

Manage Debug Logs: Debug logs are generally designed to be used to detect and correct flaws in an application. These logs can leak sensitive information that may help an attacker create a more powerful attack.

The Android system log typically used by apps for outputting debug messages is a circular buffer of a few kilobytes stored in memory. It may also be possible to recover debug logs from the filesystem in the event of a kernel panic.

Possible solutions

Use ProGuard or DexGuard to completely remove calls to the Log class in release builds, stripping all calls to the Log.v, Log.d, Log.i, Log.w, and Log.e methods.

-assumenosideeffects class android.util.Log {
    public static *** v(...);
    public static *** d(...);
    public static *** i(...);
    public static *** w(...);
    public static *** e(...);
}

Remove public access of components: Components accessed via Intents can be public or private. The default depends on whether an intent-filter is present, and it is easy to mistakenly allow a component to be or become public. Components declared public in the Manifest are open by default, so any application can access them.

Possible solutions

If a component does not need to be accessed by other apps, set android:exported="false" on that component in the app's Manifest.
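For example, for a service intended only for in-app use (the service name is a placeholder):

```xml
<service
    android:name=".SyncService"
    android:exported="false" />
```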

Avoid Intent Sniffing: When another application initiates an activity by sending a broadcast intent, malicious apps can read the data included in the intent. A malicious app can also read a list of recent intents for an application. For example, if an app invokes and passes a URL to the Android web browser, an attacker could sniff that URL.

Possible solutions

Do not pass sensitive data between apps using broadcast intents. Instead, use explicit intents.

Follow WebView best practices: WebView consumes web content that can include HTML and JavaScript, and improper use can introduce common web security issues such as cross-site scripting (JavaScript injection). WebViews can introduce a number of security concerns and should be implemented carefully. In particular, a number of exploitable vulnerabilities arising from use of the addJavascriptInterface API have been discovered.

Possible solutions

1. Disable JavaScript and plugin support if they are not needed. While both are disabled by default, best practice is to explicitly set them as disabled.

2. Disable local file access. This restricts access to the app's resource and asset directories and mitigates attacks from a web page that seeks to gain access to other locally accessible files.

3. Disallow the loading of content from third-party hosts. This can be difficult to achieve from within the app, but a developer can override shouldOverrideUrlLoading and shouldInterceptRequest to intercept, inspect, and validate most requests initiated from within a WebView. A developer may also consider implementing a whitelist scheme, using the URI class to inspect components of a URI and ensure it matches an entry within a list of approved resources.

// Domain whitelisting
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
    if (Uri.parse(url).getHost().equals("www.oreilly.com")) {
        // Whitelisted host: let the WebView load the page itself
        return false;
    }
    // Any other host: block the navigation
    return true;
}


Note that WebView does not honor the Android Manifest flag android:usesCleartextTraffic which can help prevent an app from using cleartext network traffic (e.g., HTTP and FTP without TLS).

Sign Android APKs: APKs should be signed correctly with a non-expired certificate.

Sign a production app with a production certificate, not a debug certificate

Make sure the certificate includes a sufficient validity period (i.e., won’t expire during the expected lifespan of the app)

Google recommends that your certificate use at least a 2048-bit key

Make sure the keystore containing the signing key is properly protected

Also, restrict access to the keystore to only those people that absolutely require it

Follow these security recommendations to make your Android application strong enough to withstand attacks by malicious users.

Refer to these links for full insight into security threats: