Your data is securely stored in your database, and we want to ensure that it remains secure when offloaded to the cloud. We have designed our solution to meet stringent security criteria and to ensure that only authorized entities can access the data. In addition, we have defined a strict process throughout the information life cycle to guarantee data integrity. You can find additional information on our data integrity measures here.

From a security standpoint, our main design criterion is tight integration with the database security model. This ensures a predictable security scheme in which the security principles of your application are not broken just because a new cloud object store is added to the equation.

Going through the details:

  • Database schemas: we create only one additional schema and user (dbcloudbin), which stores only configuration information and a shared-secret mechanism. The only external entity that needs to connect to the database with this user is the DBcloudbin agent. The user credentials are explicitly defined at setup (so you can ensure they follow your organization's security policies) and are stored encrypted. In any case, this user has minimal grants and, very importantly, has no access to any actual application data schema (see the first sketch after this list).
  • Shared secret: the DBcloudbin agent requires authentication credentials for any BLOB read operation. These shared secrets are securely generated and stored in the database, so only the DBcloudbin agent and the database schema owner can read them. In any case, the DBcloudbin agent only serves data for requests that provide a correct BLOB identifier; this identifier is stored in the application owner schema and is practically impossible to generate by brute force (see the identifier sketch after this list).
  • Encrypted communications: communications between the DBcloudbin agent and our Cloud Object Store are encrypted using HTTPS, so intercepting them will not reveal any data in the clear. In any case, the payload is also encrypted at the source, so there are two independent layers of encryption (illustrated in the encryption sketch after this list). By default, communications between the database server and the DBcloudbin agent use HTTP, not HTTPS. This is because in most cases those communications are intra-host, or at least intra-LAN (the database and the DBcloudbin agent are frequently installed on the same server), and this security risk does not outweigh the additional setup complexity (it would require installing SSL certificates in the Oracle database so that it trusts the communications). If you do need this additional secure SSL channel, it can be configured; just file a support request and we will send you the detailed instructions.
  • Encrypted data: we take your data privacy very seriously. When you move data from your database to our Cloud Object Store, it is encrypted using a secure AES algorithm and a randomly generated password that is stored only in your database application schema. The data we receive in our datacenter is therefore completely obscured, and only your database can recover the information in clear using the stored key. For that reason it is very important to maintain a sound backup policy for your database; if you lose your database and have no backup, you lose practical access to the data stored in our Cloud Object Store (we can return the data to you, but you cannot read it in clear). The good news is that this backup is much easier and cheaper thanks to the much smaller database size. This behavior is configurable per schema or even per table (you can disable data encryption if you prefer the data to be stored in clear in our Cloud Object Store).
  • CLI operations: all CLI operations (archive, restore, clean…) that move actual data to the DBcloudbin Cloud Object Store require database schema credentials (the credentials of the database user that owns the data). This way, we ensure that only a user with those credentials can execute those operations. For convenience, those credentials are stored in a user-profile-specific file on the server where the CLI runs (so you do not have to specify the credentials in every command); they are stored encrypted, but for additional security you can delete them through the CLI once the desired commands have been executed (e.g. in a scheduled task script, you can set the credentials, execute the archive/restore command(s) and then delete the credentials).
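
To illustrate the point about the dbcloudbin user having only minimal grants, here is a minimal JDBC sketch. It is not part of the product: the connection URL, the password and the application table name (appowner.invoices) are hypothetical placeholders, and it assumes the Oracle JDBC driver is on the classpath. The dbcloudbin user can query its own configuration-only schema, but any attempt to read application data fails because no grants exist on those objects.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DbcloudbinGrantCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; requires the Oracle JDBC driver on the classpath.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
        try (Connection conn = DriverManager.getConnection(url, "dbcloudbin", "change_me");
             Statement stmt = conn.createStatement()) {

            // The dbcloudbin user can read its own (configuration-only) schema...
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM user_tables")) {
                rs.next();
                System.out.println("Tables in the dbcloudbin schema: " + rs.getInt(1));
            }

            // ...but it has no grants on application data, so a query against a
            // hypothetical application table is expected to fail (ORA-00942).
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM appowner.invoices")) {
                rs.next();
                System.out.println("Unexpected: application data is readable");
            } catch (SQLException expected) {
                System.out.println("As designed, no access to application data: "
                        + expected.getMessage());
            }
        }
    }
}
```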
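
DBcloudbin does not publish the exact format of its BLOB identifiers, so the following is only a hedged sketch of the general principle behind the shared-secret bullet: an identifier drawn from a cryptographically secure random source with enough entropy (256 bits here) cannot realistically be guessed by brute force.

```java
import java.security.SecureRandom;
import java.util.HexFormat;

public class BlobIdSketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    // A 256-bit random identifier: 2^256 possible values, so guessing a valid
    // identifier by brute force is computationally infeasible.
    static String newBlobId() {
        byte[] id = new byte[32];
        RANDOM.nextBytes(id);
        return HexFormat.of().formatHex(id); // HexFormat requires Java 17+
    }

    public static void main(String[] args) {
        System.out.println("Example identifier: " + newBlobId());
    }
}
```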
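
The two bullets on encryption describe one end-to-end pattern: the payload is AES-encrypted at the source with a randomly generated key that never leaves your database, and the already-encrypted bytes travel over HTTPS, giving a second, independent layer of protection. The sketch below illustrates that pattern only; it is not DBcloudbin's actual code, and the AES mode (GCM), key size and object store URL are assumptions made for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class DoubleEncryptionSketch {
    public static void main(String[] args) throws Exception {
        byte[] blob = "application BLOB content".getBytes();

        // 1. Encrypt the payload at the source with AES and a randomly generated key.
        //    In DBcloudbin the key is kept in the application schema in the database;
        //    in this sketch it simply stays in memory.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] encrypted = cipher.doFinal(blob);

        // 2. Send the already-encrypted payload over HTTPS (the second, independent
        //    layer of encryption). The endpoint URL is a placeholder.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://objectstore.example.com/blobs/1234"))
                .PUT(HttpRequest.BodyPublishers.ofByteArray(encrypted))
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Upload status: " + response.statusCode());
    }
}
```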