One of the main advantages of storing binary data in a relational database is the transactional consistency that any enterprise database provides: modifications or deletions can be committed or rolled back in a consistent transaction. Extracting the binary data from the database opens a series of risk scenarios that have to be carefully managed to avoid data integrity issues.

We have designed our DBcloudbin solution with these concepts in mind. In addition, we must avoid any potential data loss or data unavailability, no matter how complex the scenario.

The four checks for cleaning binary content at your database

Moving data from your database to the cloud is an operation that has to be executed with care, to avoid data unavailability (and, of course, data loss). We have designed a very cautious process that performs four strict checks before cleaning data from your database:

  1. You have explicitly requested it. We only move the content that you explicitly request to be moved to DBcloudbin, using the CLI command “dbcloudbin archive -clean”. If the “-clean” option is not specified, the data is not cleaned at the database tier.
  2. Data is successfully stored at the DBcloudbin Cloud Object Store. We check that the data is successfully stored at the DBcloudbin Cloud Object Store, on a per-row basis. If anything fails when the operation is requested, the transaction is aborted and rolled back.
  3. Data can be retrieved from the database as your application would read it. This is an important step. We prioritize consistency over performance in the archive process. So, once the content is correctly stored at the DBcloudbin Cloud Object Store, we issue a read operation using a SQL query, essentially the same way your application would. If this fails for any reason, the operation is aborted and rolled back.
  4. Data retrieved from DBcloudbin is exactly the same as the data stored in the DB. The last check makes sure that what we stored is what we read. We perform a binary comparison of the content inside the DB and the content we have read from DBcloudbin. Only when the previous checks and this last one have all passed is the content cleaned from your database, remaining available through the DBcloudbin service.
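The four checks above can be sketched as a single per-row routine. This is a minimal illustration using in-memory stand-ins, not the real DBcloudbin implementation or API; all class and function names here are hypothetical:

```python
class MemoryStore:
    """In-memory stand-in for both the database BLOB column and the
    DBcloudbin Cloud Object Store (illustrative only, not the real API)."""
    def __init__(self):
        self.data = {}

    def put(self, key, blob):
        self.data[key] = blob

    def get(self, key):
        return self.data[key]

    def delete(self, key):
        del self.data[key]


def archive_blob(db, store, row_id, clean=False):
    """Sketch of the four-check archive flow for one row.
    Any raised exception models aborting and rolling back the transaction."""
    blob = db.get(row_id)
    store.put(row_id, blob)       # check 2: stored in the object store
    fetched = store.get(row_id)   # check 3: read back as the application would
    if fetched != blob:           # check 4: binary comparison of both copies
        raise RuntimeError("content mismatch; transaction aborted")
    if clean:                     # check 1: clean only if explicitly requested
        db.delete(row_id)
```

In this sketch, omitting `clean=True` leaves the database copy untouched, mirroring the behavior of running the archive command without the “-clean” option.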

Transactional consistency

Our solution is designed to ensure we don’t break DB transactional consistency in common application operations. We ensure transactional consistency at the database level both in normal CRUD operations (insert, read, update, delete) and through archive/restore processes. SQL DELETE statements issued by your application on tables with archived BLOBs are logged. If the transaction is rolled back, the content is unaltered and the application can keep reading it as before. If the transaction is committed, the object becomes inaccessible to the application (as it would be if DBcloudbin were not installed), but the content is not physically deleted until you explicitly issue a purge of deleted BLOBs. In that way, you gain an extra level of content backup and an easy way to restore your application content in the case of accidental deletes: by restoring the accidentally deleted rows from your backup, you enable the application to fetch this content again from the DBcloudbin Cloud Object Store.
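The delete/purge behavior described above can be modeled generically. The sketch below uses a hypothetical in-memory catalog (not the real DBcloudbin schema) to show why a committed application delete hides the content without destroying it, and why restoring the deleted row from a backup makes it reachable again:

```python
class ArchivedBlobCatalog:
    """Illustrative model of logged deletes and explicit purge;
    class and method names are hypothetical, not the real DBcloudbin API."""
    def __init__(self):
        self.objects = {}     # content kept in the object store
        self.deleted = set()  # rows deleted by the app, not yet purged

    def app_delete(self, row_id):
        # A committed SQL DELETE: the row becomes invisible to the app,
        # but the object-store content is kept.
        self.deleted.add(row_id)

    def read(self, row_id):
        # Reads behave as if DBcloudbin were not installed: a deleted
        # row's content is inaccessible.
        if row_id in self.deleted:
            raise KeyError(row_id)
        return self.objects[row_id]

    def undelete(self, row_id):
        # Restoring the deleted row from a DB backup makes the content
        # reachable again, because it was never physically removed.
        self.deleted.discard(row_id)

    def purge_deleted(self):
        # Only an explicit purge physically removes deleted content.
        for row_id in self.deleted:
            del self.objects[row_id]
        self.deleted.clear()
```

The key design point this models: physical deletion is decoupled from logical deletion, so an accidental DELETE is recoverable until a purge is explicitly issued.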