Completely serverless

Going serverless was one of the key design decisions we made early on: we determined it would lead to the most minimal, simple solution by offloading responsibility onto our cloud provider (AWS) and building on the reliability and robustness of fully managed cloud services.

Furthermore, our goal is to decentralize our entire software stack as decentralized technologies mature. We determined that a completely serverless architecture puts us in the best position for this transition, allowing us to migrate in steps, since our architecture is far more modular and loosely coupled than a traditional server-based stack.

Our entire backend/cloud infrastructure is defined as CloudFormation (infrastructure as code) via the Serverless framework. This allows us to conduct thorough code reviews (another of our guiding principles) and benefit from source control. It’s hard to even imagine taking the traditional route of managing cloud servers and manually provisioning cloud infrastructure/resources, let alone working without a centralized dashboard (CloudFormation stacks) where we can view all of our infrastructure/resources.

Our backend/cloud infrastructure is composed of a few services (in CloudFormation terminology, templates) that interact with each other to support our frontend. Our frontend’s goal is simply to facilitate the interaction between our users and smart contracts at a reasonable level of abstraction.

We will open source our smart contracts when we launch, to be as transparent as possible and to run effective bug bounty programs. Because our frontend/backend doesn’t add significant functionality to our platform, we are keeping its source code private for the time being. We see no value in open sourcing this part of our software stack, only the risk that potential attackers gain increased visibility into the internals of our platform.

It should also be noted that our frontend/backend run on centralized (AWS) infrastructure. It is important to stress that all significant functionality occurs on-chain (decentralized infrastructure), and that these decisions and tradeoffs were carefully considered. We plan to evolve our software stack over time (ie transitioning to IPFS and other decentralized infrastructure when the pros outweigh the cons).

The following sections describe each component of our backend in detail.

Authentication

Guests are able to freely browse our ÐApp without logging in. Logging in is required to actually invest or to submit a property for-sale. We partner with Civic and uPort for secure login and registration without the need for usernames or passwords. Each of our users is uniquely identified by blockimmo via their Civic and/or uPort identity.

The general registration/login flow is described here (Civic) and here (uPort). From a user’s perspective it’s a quick, easy process that is prompted and then completed via app. Upon the user authorizing this login request in-app (Civic/uPort), a Lambda function in our backend is triggered via POST request (API Gateway) with the user’s encrypted data. The data is decrypted with blockimmo’s secret key(s) (securely managed via AWS Secrets Manager), and temporary credentials corresponding to this user’s identity are generated and returned for use by client-side code running in the user’s browser. The user is now authenticated.
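The flow above can be sketched as a Lambda handler behind API Gateway. This is a minimal illustration, not our actual code: `decrypt_with_secret_key` and `credentials_for_identity` are assumed helpers standing in for the Secrets Manager decryption and Cognito credential steps.

```python
import json

def build_response(status_code, body):
    """Shape a Lambda proxy-integration response for API Gateway."""
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

def handler(event, context):
    # Hypothetical flow: decrypt the payload with keys held in Secrets Manager,
    # then mint temporary credentials for the user's identity.
    payload = json.loads(event["body"])
    user_data = decrypt_with_secret_key(payload["encryptedData"])  # assumed helper
    credentials = credentials_for_identity(user_data["userId"])    # assumed helper
    return build_response(200, {"credentials": credentials})
```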

Identities

Each of our users is assigned a federated identity in our Cognito identity pool the first time they are authenticated (guests that haven’t logged in are granted an unauthenticated identity). Identities are used to generate temporary credentials with specific Permissions attached to them. As our ÐApp is completely serverless, most code runs client-side in the user’s browser. These temporary credentials provide this client-side code fine-grained, least-privileged access control to specific resources/services in our backend.

These credentials are the only state our ÐApp stores in the user’s browser’s local storage. It should also be noted that our Google Analytics and Intercom plug-ins store cookies.

Permissions

We define two separate Identity and Access Management (IAM) Roles, one mapped to unauthenticated identities and another to authenticated identities. These roles are attached to the temporary credentials generated with a given identity, and enable interaction with specific resources/services in our backend. Unauthenticated users are granted read-only permission to certain resources/services, enabling them to browse our platform freely as a guest. Authenticated users are granted additional permissions (ie an authenticated user is able to read notifications directed towards themselves, and modify the description of a property listed for-sale by themselves). These roles/permissions strictly enforce least-privileged access control.
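The two role shapes can be sketched as IAM policy documents built in Python. This is illustrative only (the table ARN and condition are assumptions): guests get read-only statements, while authenticated users additionally get writes scoped to items keyed by their own identity.

```python
def listing_policy(table_arn, authenticated, user_id=None):
    """Sketch of the two IAM policy shapes: read-only for unauthenticated
    identities, plus identity-scoped writes for authenticated ones."""
    statements = [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": table_arn,
    }]
    if authenticated:
        statements.append({
            "Effect": "Allow",
            "Action": ["dynamodb:UpdateItem"],
            "Resource": table_arn,
            # Restrict writes to items whose partition key is the caller's identity
            "Condition": {"ForAllValues:StringEquals": {"dynamodb:LeadingKeys": [user_id]}},
        })
    return {"Version": "2012-10-17", "Statement": statements}
```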

A user is not able to simply list a property for-sale on our frontend (although they are able to submit a request). Listing requires prior approval by blockimmo, and attempting it without approval will result in an access-denied error. This prevents potential attackers from listing a legitimate-looking, but fake, property and collecting funds. Any property displayed on our frontend has been approved, verified, and vetted by blockimmo.

Storage

All persistent data is stored in DynamoDB tables and S3 buckets. Small records with low-latency access requirements are stored in DynamoDB tables, which automatically scale their read/write capacity based on traffic. Large objects are stored in S3, and often referenced from DynamoDB. DynamoDB has significant benefits over a traditional (SQL) database for our use-case (ie it’s fully managed, serverless, and an extremely simple key-value store that perfectly fits our needs).

Data is organized in specific tables/buckets as described in the sections below. Each DynamoDB table is accessed via a GraphQL API (AWS AppSync). GraphQL has significant benefits over a traditional REST API for our use-case (ie the attack surface is significantly reduced because GraphQL is strongly typed, permissions can be strictly enforced in the resolvers of the GraphQL API, and network latency is reduced, all while simplifying client-side code).
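A GraphQL request to AppSync is an HTTP POST whose JSON body carries the query and its variables. The query below is hypothetical (the field names are illustrative, not our actual schema), but it shows the strongly-typed shape the resolver can validate and authorize.

```python
# Hypothetical query against a Listings GraphQL API; field names are
# illustrative, not the actual blockimmo schema.
LIST_LISTINGS = """
query ListListings($limit: Int) {
  listListings(limit: $limit) {
    items { id title description tokenSaleAddress }
  }
}
"""

def graphql_request(query, variables=None):
    """Shape the JSON body AppSync expects for a GraphQL HTTP request."""
    return {"query": query, "variables": variables or {}}
```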

KYC

Our legal framework requires the completion of certain KYC (know your customer) and AML (anti-money laundering) checks before users may invest, in accordance with Swiss (and international) laws and regulations. We partner with the Swiss identity verification platform Intrum (IDnow) to ensure the best possible service for our users.

Users

All user data is stored in a Users DynamoDB table. The only resource with permission to access this table is the Kyc Lambda function, which mutates user data when a user completes KYC/AML. The Authentication Lambda function has permission to query a single attribute in this Users table, enabling it to determine a given user’s level of completed KYC/AML at login.
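Reading a single attribute maps to a DynamoDB `GetItem` with a `ProjectionExpression`. A sketch of the request parameters the Authentication function would build (the `kycLevel` and `userId` attribute names are illustrative):

```python
def kyc_level_query(user_id):
    """Build GetItem parameters that fetch only the KYC-level attribute,
    matching the single-attribute permission granted to Authentication.
    Attribute names here are illustrative."""
    return {
        "TableName": "Users",
        "Key": {"userId": {"S": user_id}},
        "ProjectionExpression": "kycLevel",
    }
```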

Listings

Information about properties for-sale on the blockimmo platform is stored in a Listings DynamoDB table. This table is read-only for all users, whether authenticated or unauthenticated. The seller of a property has additional permissions to mutate certain attributes of their listing. This is enforced in the resolver of the GraphQL API (ie TokenSaleAddress may only be mutated by blockimmo, but the description of a property may be mutated by its seller).
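The per-attribute authorization a resolver enforces can be sketched as a pure function. The attribute sets below are assumptions for illustration (only `TokenSaleAddress` and `description` come from the text above).

```python
# Attributes only blockimmo may mutate vs. attributes a seller may mutate
# (sets are illustrative beyond the two named in the text).
BLOCKIMMO_ONLY = {"TokenSaleAddress"}
SELLER_MUTABLE = {"description", "images"}

def may_mutate(attribute, caller_id, seller_id, is_blockimmo=False):
    """Sketch of the per-attribute check a GraphQL resolver performs."""
    if attribute in BLOCKIMMO_ONLY:
        return is_blockimmo
    if attribute in SELLER_MUTABLE:
        return is_blockimmo or caller_id == seller_id
    return False  # everything else is read-only
```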

Orders

0x is used to allow investors to freely buy and sell tokens of property for Ether. An order book containing all offers is maintained, allowing buyers to browse these offers and complete trades.
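Browsing the order book amounts to filtering open offers for one property token and ranking them by price. A minimal sketch (the order fields are illustrative, not the actual 0x order schema):

```python
def best_offers(order_book, token_address):
    """Return open sell offers for one property token, cheapest first.
    Order fields (token, open, price_wei) are illustrative."""
    offers = [o for o in order_book if o["token"] == token_address and o["open"]]
    return sorted(offers, key=lambda o: o["price_wei"])
```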

Events

Real-time notifications are provided to our users via Apollo GraphQL / AWS AppSync subscriptions. The visibility of these notifications is tied to the user’s underlying Cognito identity, so users only have read access to their own notifications (ie notifications are directed towards property sellers, and are triggered when users purchase shares of a property).
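A subscription of this kind might look like the following hypothetical GraphQL document (field names are illustrative); AppSync delivers matching events only to clients whose identity passes the resolver's authorization check.

```python
# Hypothetical AppSync subscription scoped to one identity's notifications;
# field names are illustrative, not the actual blockimmo schema.
ON_NOTIFICATION = """
subscription OnNotification($identityId: ID!) {
  onNotification(identityId: $identityId) {
    id type message createdAt
  }
}
"""
```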

Logs

The services/resources in our backend generate thorough, structured logs to provide tracing capabilities and visibility. Every resource is granted permission to interact with other resources on a least-privileged basis; Logs is an IAM role that grants these services permission to write their logs. We utilize AWS CloudTrail for improved visibility into these logs.

Backup

All data is backed up in an S3 bucket in case it is required in the future for any reason. Any time data changes, a snapshot of it is delivered to the S3 bucket via a Firehose stream.
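Firehose delivers raw records to S3, so each snapshot can be serialized as one newline-delimited JSON record. A sketch of that serialization (the record fields are assumptions for illustration):

```python
import datetime
import json

def snapshot_record(table, item):
    """Serialize a change snapshot as a newline-delimited JSON record,
    the shape a Firehose delivery stream would write to the backup bucket.
    Field names are illustrative."""
    record = {
        "table": table,
        "capturedAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "item": item,
    }
    # Trailing newline keeps records separable when Firehose concatenates them
    return (json.dumps(record) + "\n").encode("utf-8")
```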

Cloudfront

Static files and assets are stored in an S3 bucket and served via a CloudFront content delivery network (CDN). The S3 bucket where these files live is accessible only by this specific CDN; any other request to it will fail with a 403 Forbidden error. Serving content via a CDN has many benefits, including secure data delivery with low latency and high transfer speeds globally. CloudFront also integrates seamlessly with other services we utilize, like AWS Shield for managed DDoS protection and the web application firewall (WAF) described in the next section.
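Locking the bucket to the CDN is done with a bucket policy that grants read access only to a CloudFront origin access identity (OAI), so direct S3 requests return 403. A sketch (bucket name and OAI id are placeholders):

```python
def cdn_only_bucket_policy(bucket, oai_id):
    """Bucket policy granting s3:GetObject only to a CloudFront origin
    access identity; all other requests fail with 403 Forbidden."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
```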

WAF

We utilize a web application firewall (WAF) in front of our CDN, described here. It includes a honeypot, SQL injection and cross-site scripting protection, log parsing to identify suspicious behavior, hourly checks of third-party IP reputation lists for malicious addresses to add to a block list, and HTTP flood protection.

A bad bot 🤖… but we ❤️ good bots (SEO)

Alarms

We watch various metrics/services and have CloudWatch alarms configured to email and text the appropriate people at blockimmo via SNS. This is critical for fast response times to certain events.

Deployment

Each service is deployed by a blockimmo admin with serverless deploy -v. The admin must have the proper IAM role to complete this process. Keeping these admin credentials secure is blockimmo’s responsibility, which we take extremely seriously.

Penetration test findings

Two medium-severity vulnerabilities were found during the pen test and are described in the vulnerability report provided to blockimmo by Hosho. The first was a missing Content Security Policy (CSP), and the second was related to missing security headers. Resolving these issues was extremely simple with Lambda@Edge: we add the necessary CSP and security headers to our Origin Response, exactly as described here.
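The fix can be sketched as a Lambda@Edge origin-response handler that injects the headers before CloudFront caches the response. The header values below (including the CSP) are illustrative placeholders, not our production policy.

```python
def handler(event, context):
    """Lambda@Edge origin-response handler injecting security headers.
    Header values are illustrative, not a production CSP."""
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]
    for name, value in [
        ("content-security-policy", "default-src 'self'"),
        ("strict-transport-security", "max-age=63072000; includeSubdomains; preload"),
        ("x-content-type-options", "nosniff"),
        ("x-frame-options", "DENY"),
        ("x-xss-protection", "1; mode=block"),
        ("referrer-policy", "same-origin"),
    ]:
        # CloudFront expects lowercase keys mapping to [{key, value}] lists
        headers[name] = [{"key": name.title(), "value": value}]
    return response
```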