2nd Annual PostgreSQL PGDay Event In Bengaluru, India On February 26th, 2016

The community is back with its 2nd PGDay event, happening once again in Bengaluru, India’s Silicon Valley. There’s also a brand-new site for the event, and we expect increased participation.

With Robert Haas delivering the keynote and a couple of sponsors already committed, it looks like it’s going to be a fun, geeky event this time around as well!

Folks can RSVP here, and the call for papers is out as well.

Please spread the good word, submit papers, and please attend in large numbers! This awesome open source database deserves an awesome audience, doesn’t it?

See you all in February, 2016!

Securing APIs Using SecureDB Encrypted Identity Manager

So, you are building new RESTful APIs and want your customers to build against them. Or you have an appliance that you will ship to customer locations, which will call your APIs routinely.

In both of these scenarios, you want your customers (the API clients) to authenticate before invoking protected APIs. You also want proper access control (authorization) in place, so that API clients can only call the APIs they are authorized to invoke. In the case of multi-tenant APIs, you also want to ensure that data is completely segregated.

SecureDB’s Encrypted IdM can act as the Identity server for your APIs, saving you time and effort. This post delves into how to protect your APIs.

Using SecureDB Encrypted IdM to protect APIs

The API Client is a RESTful client written to consume Your REST APIs. With SecureDB Encrypted Identity Manager (IdM) managing your API keys and other customer information, Your REST APIs call SecureDB Encrypted IdM to authenticate the API Client or to make authorization decisions.

Create API Keys

For every new customer, you may issue one or more API Keys (that is, an API Key and API Secret combination). This can be done by simply calling the /quickregister API as shown below.
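A call of that shape might look like the following sketch (the hostname and request fields here are illustrative assumptions, not the documented contract):

```
curl -X POST "https://api.securedb.io/securedb/quickregister" \
     -H "Content-Type: application/json" \
     -d '{"customerName": "Acme Inc."}'
```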

Note: The URL will be different if you are using SecureDB Enterprise, our on-prem solution.

In the snippet above, we just created an API Key and API Secret in SecureDB’s Encrypted IdM. You can pass these to your customer in a secure way. In a real-life scenario, you may have a portal that lets your customers download their API Key and API Secret; in that case, your portal would call the /quickregister API to create the credentials. This API’s response has the UUID of the API Key:
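The response body might look something like this sketch (field names are illustrative; the text above only asserts that the API Key’s UUID is present):

```
{
  "apiKey": "…",
  "apiSecret": "…",
  "uuid": "1a2b3c4d-…"
}
```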

Client Authentication

Let’s assume that your APIs require your API clients to authenticate. You could ask your API clients to send the API Key and API Secret combination with every call. If that’s the case, from your API code, simply pass the API Key and API Secret to be validated against SecureDB’s Encrypted IdM.
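One plausible shape for that validation call, assuming HTTP Basic authentication carries the pair (an assumption, not the documented contract):

```
curl -X POST "https://api.securedb.io/securedb/authentication" \
     -H "Authorization: Basic $(printf '%s' "$API_KEY:$API_SECRET" | base64)"
```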

If the authentication is successful, SecureDB Encrypted IdM will send back a 200 OK response. This is your indication that you can serve the client’s request.

Instead of asking your API client to send the API Key and API Secret with every request, you could ask the client to send the pair only the first time. You could use the above API to authenticate that request, but return a JSON Web Token (JWT) to your API client instead. SecureDB Encrypted IdM supports JWT, which can be used to authenticate subsequent API requests.

To make SecureDB Encrypted IdM return a JWT upon authentication, simply call the /authentication method without the Authorization header. The JWT is returned by SecureDB Encrypted IdM as part of the response headers:
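A sketch of such a call and response (the hostname, request fields, and the header carrying the token are assumptions):

```
curl -i -X POST "https://api.securedb.io/securedb/authentication" \
     -d '{"apiKey": "<API Key>", "apiSecret": "<API Secret>"}'

HTTP/1.1 200 OK
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.<payload>.<signature>
```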

You can pass this JWT to the API client and expect the client to send the token as part of every request. By default, the JWT is valid for 60 minutes (this is configurable). Now, when your API client sends this JWT back to you, you can check the validity of the JWT yourself. To do this, you need to make sure that the JWT was not tampered with by the API client. To validate it, you need the secret key that was used to construct the JWT signature. This secret key acts as a shared secret between your APIs and SecureDB Encrypted IdM.
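A minimal sketch of that validation step, assuming an HS256-signed token and using openssl to recompute the HMAC signature (the secret value and claims below are fabricated for illustration):

```shell
#!/usr/bin/env bash
# Shared secret between your API tier and the token issuer (fabricated value).
SECRET='shared-secret-from-securedb'

# Base64url encoding, as used by JWT.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# Build a sample token the way an HS256 issuer would.
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"sub":"api-key-uuid","role":"admin"}' | b64url)
sig=$(printf '%s' "$header.$payload" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
jwt="$header.$payload.$sig"

# Validation: recompute the signature over header.payload and compare.
expected=$(printf '%s' "${jwt%.*}" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
[ "${jwt##*.}" = "$expected" ] && echo "signature valid"
```

If the recomputed signature does not match, the token was tampered with (or signed with a different secret) and the request should be rejected.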

Client Authorization

Now, let’s add authorization (access control) into the mix. SecureDB Encrypted IdM supports Roles and hence can aid you in implementing effective Role Based Access Control (RBAC).
Let’s now add a role to the API Key created earlier:
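The role-assignment call might be shaped like this sketch (the path and payload are hypothetical; consult the API list for the real endpoint):

```
curl -X POST "https://api.securedb.io/securedb/apikeys/<apiKeyUuid>/roles" \
     -d '{"role": "admin"}'
```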

The API Key you issued earlier now has a Role called admin associated with it. Now, let’s assume you want a certain API of yours to be accessible only by customer API Keys that have the admin Role. You’ll be glad to know that SecureDB Encrypted IdM puts the Role into the JWT payload. So, after you validate the incoming JWT, you can pull the Role from it and make your authorization decision.

Lock the API Key

Now, let’s say one of your API clients is misbehaving, or hasn’t paid you and you want to lock the API Key temporarily. This is easy too:
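As a hypothetical sketch (the actual path may differ):

```
curl -X PUT "https://api.securedb.io/securedb/apikeys/<apiKeyUuid>/lock"
```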

All subsequent authentication calls will fail until you unlock your customer’s API Key. Similarly, you can delete the API Key too. For a full list of available APIs, click here. To try the APIs, use our API Playground tool.

Summary

In short, SecureDB Encrypted IdM can not only be used as a secure Identity Manager for your Web applications; it can also form the backbone of your API strategy. Your APIs have the same Identity Management requirements as your Web applications, and this is where SecureDB Encrypted IdM shines.

Multi-Tenant SaaS With PostgreSQL

So you have a multi-tenant SaaS application using PostgreSQL as its database of choice. As you are serving multiple customers, how do you protect each customer’s data? How do you provide full data isolation (logical and physical) between different customers? How do you minimize the impact of attack vectors such as SQL injection? And how do you retain the flexibility to move a customer to a higher hosting tier or a higher SLA?

1. One DB per customer

Instead of putting every customer’s data in one database, simply create one database per customer. This allows for physical isolation of data within your Postgres cluster. So, for every new customer that registers, do this as part of the workflow:
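That workflow step can boil down to a single statement like the following sketch (the database name is illustrative; customer_template_v1 is the custom template described next):

```
CREATE DATABASE customer_A TEMPLATE customer_template_v1;
```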

In the example above, customer_template_v1 is a custom database template with all of the tables, schemas, and procedures pre-created.

Note: You can use schemas or Row Level Security (v9.5) to effect isolation. However, schemas and Row Level Security only provide logical isolation. You could go to the other extreme and use a DB cluster (as opposed to a database) per customer for complete data isolation, but the management overhead makes that a less-than-ideal option in most cases.

2. Separate DB user(s) per customer

After the database is created as described above, create a unique database user as well. This user would have permission to one (and only one) database: customer_A.
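A sketch of those statements (the password placeholder and the public-schema grant are assumptions about your setup):

```
CREATE USER customer_A_user WITH PASSWORD '<strong-password>';
REVOKE CONNECT ON DATABASE customer_A FROM PUBLIC;
GRANT CONNECT ON DATABASE customer_A TO customer_A_user;
-- run the following while connected to customer_A:
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO customer_A_user;
```

Revoking CONNECT from PUBLIC matters: by default any role can connect to a new database, so the per-customer grant alone is not exclusive.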

Now, in your middleware code, make sure to connect to the customer_A database only as customer_A_user. In other words, when a user from the customer_A organization logs into your SaaS application, use the appropriate database and database user name.

If you wish, you can even create separate READ and WRITE users. So, to create a read user for the database customer_A:
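For example (again a sketch; adjust schema names to match your template):

```
CREATE USER customer_A_read WITH PASSWORD '<strong-password>';
GRANT CONNECT ON DATABASE customer_A TO customer_A_read;
-- run the following while connected to customer_A:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO customer_A_read;
```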

With the above, you have fine-grained control over database access privileges, and every activity in the middleware must carefully decide which role (read or read/write) to use for access.

So, which DB user/role do you use to create the new customer database in the first place? Create a special DB user (say, create_db_user) just for this purpose. Audit and monitor this user’s activity closely, and don’t use it for anything else. Alternatively, you can create a new user for each new database and simply specify it at database creation time. Whatever happens, don’t use the Postgres superuser for your web connections!

As you may have noticed, a number of SaaS applications give vanity URLs (example: https://customerA.example.com) to their customers. Other SaaS applications have the concept of a ‘customerId’, a required field for authentication into the SaaS application. The benefit is two-fold:

  1. As the user logs into the SaaS application, the middleware code knows exactly which database to connect to.
  2. This also helps to keep the URL space isolated, allowing the SaaS application to start isolation at the web server level itself.

3. Separate crypto keys per customer

If you are doing any encryption within the database (say, with pgcrypto), make sure to use separate encryption keys for each customer. This adds cryptographic isolation between your customers’ data. Finally, when it comes to encryption and key management, avoid these common encryption errors developers keep making.
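For instance, with pgcrypto’s symmetric functions, each customer’s database can use its own key material (the table, column, and key names here are illustrative):

```
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- inside customer_A's database, using customer_A's key only:
INSERT INTO payment_methods (card_number)
VALUES (pgp_sym_encrypt('4111111111111111', '<customer_A_key>'));
SELECT pgp_sym_decrypt(card_number, '<customer_A_key>') FROM payment_methods;
```

How the per-customer key reaches the query (environment, KMS, vault) is up to your middleware; don’t store it alongside the data it protects.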

Comment and let us know what other best practices make sense for multi-tenant SaaS with PostgreSQL.

Secure REST APIs From Common Attack Vectors

1. Rate Limit every API

Your APIs are either public or protected/private. Irrespective of whether they are public or private, they need to be rate limited. Let’s consider why this is needed:

An Example

Let’s consider an API that accepts an email address and checks an internal database to see if that email address belongs to a member. If it does, the API sends an automated email and returns 200 OK. If it does not, no email is sent and the API returns a 404 Not Found status.

An attacker can exploit this situation in two ways:

  1. If the attacker knows the email address of a valid user, he can simply call this API in a loop thousands or millions of times. This floods your user’s inbox and increases your email service (SendGrid, SES) bills.
  2. If the attacker does not know a valid email address in the system, he can keep guessing until he finds one (basically an enumeration attack).

Solution

Rate limiting alleviates this. You allow only a certain number of API calls per minute (or per hour) from a specific IP address. Once the rate is exceeded, your API should return 429 Too Many Requests.

Implementation of this solution varies based on your stack. At the simplest level, it could be a per-API, per-hour hashmap with the client IP address as the key and the number of hits as the value. At the top of every hour, simply clear out the hashmap. Or you can lean on your web server: Nginx has ngx_http_limit_req_module, and Apache has third-party modules such as mod_evasive (mod_ratelimit limits bandwidth rather than request rate).
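As a toy sketch of the hashmap approach (bash 4+; a production limiter would live in your API tier or web server, and the IP and limit here are illustrative):

```shell
#!/usr/bin/env bash
# Toy per-hour rate limiter: client IP address -> number of calls this hour.
declare -A hits
LIMIT=100   # allowed calls per IP per hour

# Record a hit and set STATUS to the response your API should return.
rate_limit() {
  local ip="$1"
  hits[$ip]=$(( ${hits[$ip]:-0} + 1 ))
  if (( hits[$ip] > LIMIT )); then
    STATUS="429 Too Many Requests"
  else
    STATUS="200 OK"
  fi
}

# At the top of every hour, simply clear the map.
reset_counters() { hits=(); }
```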

2. Validate every input

RESTful APIs accept input via URL parameters in GET requests, via the request body in POST/PUT requests, or via HTTP headers. Each of these gives an attacker an opportunity to inject scripts into your API tier.

A few guidelines

  1. Treat every single input coming from an API client as untrusted data, even if it is data your API sent to that client in a previous call.
  2. While validating inputs, prefer whitelists over blacklists.
  3. Strongly type every input parameter coming into your API.
  4. Look for validation libraries on your platform, and search for readily available regex patterns for the most common inputs. Building your own input validation framework should be your last option.
  5. Test, test, and test some more.
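To make the whitelist idea concrete, here is a tiny sketch of pattern-based validators (the patterns are deliberately strict and illustrative, not exhaustive):

```shell
#!/usr/bin/env bash
# Whitelist validation: accept only inputs matching a strict pattern,
# reject everything else by default.
valid_email() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'
}

valid_uuid() {
  printf '%s' "$1" | grep -Eqi '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}
```

The same shape applies in any language: define what a valid value looks like and refuse everything that does not match, rather than trying to enumerate bad values.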

3. Use a WAF

Your REST API is a web application. Put it behind a Web Application Firewall (WAF). Though a WAF will not solve all your security needs, it gets you closer. Plus, it blocks a ton of annoying and malicious web requests (unwanted crawlers, port scanners, etc.), reducing distractions.

4. Restrict who can access the APIs

Based on who your API’s customers are, the following advanced options may be viable for you.

  • IP-Based Filtering: If your customers call you from their server side, you may be able to insist that they call from a static IP address. Allow only known IP addresses and block everything else. Any basic firewall will let you do this.
  • VPN: Insist on a VPN tunnel between the API client and your API server.
  • Certificate-Based Authentication: Enable certificate-based authentication for your APIs. No client without the certificate will be able to call your APIs.
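For the IP-filtering option, the firewall rules can be as simple as this sketch (addresses are illustrative; iptables is shown, but any firewall works):

```
# Allow the customer's known static IP to reach the API port; drop everyone else.
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
```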