DBcloudbin overview

Benefits

The benefits of implementing DBcloudbin are compelling:

  • Immediate implementation. Installation is automated with a simple wizard that completes in minutes.
  • Application transparency. The DBcloudbin data layer is designed so that the application works with no modifications, whether the content resides in the database or in the Cloud. In the latter case, the DBcloudbin agent fetches the content transparently and injects it into the query result, so the application perceives the content as local, with minimal added latency (see the first sketch after this list).
  • Infrastructure cost savings. Binary content is generally much larger than relational data; moving it outside the database reduces database size very significantly (by up to 80% or 90%), so the expensive database infrastructure (large servers and fast storage) can be reduced accordingly.
  • Simplified backup. Database backup gets simpler and smaller as well: the regular backup is much smaller, and so are backup and restore times. Content stored in the object store is intrinsically protected by DBcloudbin's design (objects are never updated, only versioned; see the second sketch after this list) and by the store's replication mechanisms.
  • Improved performance. Binary data such as documents is typically read in human-interaction scenarios (e.g. a person pressing a button or link in the application interface to open a document), where a slightly higher read latency has negligible impact. In contrast, the complex queries with sorts, filters and joins that any non-trivial application must execute benefit greatly from a smaller database, so reducing database size by 80% or 90% yields a remarkable performance gain. On the other hand, writes (inserts, updates) in a DBcloudbin-enabled database always go to the database first (the data transfer to the Cloud is a batched, decoupled operation), so there is no difference in that scenario.
  • Big-data analytics. In many cases we would like to run advanced analytics on a large dataset of contents; if those datasets live in our business-critical enterprise database, we face a serious limitation due to the potential performance impact. With the contents in a large, scalable object store, we can analyze them freely without impacting the business-critical database and without duplicating the content (and risking the inconsistencies that duplication can introduce).
  • Database migration. A database migration project (e.g. a major version upgrade or hardware re-platforming) is very complex, and its complexity grows steeply with database size; database unavailability during migration also depends heavily on size. If we can move the vast majority of the content while the application stays online, the maintenance window can be reduced significantly.
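To make the transparency mechanism concrete, below is a minimal Python sketch of the pattern, not DBcloudbin's actual implementation: the documents table, its columns and the key naming are hypothetical, and an in-memory dictionary stands in for the cloud object store. What it shows is that the application keeps a single, unchanged read path whether the content is still in the database or has already been archived.

    import sqlite3

    # Stand-in for a cloud object store (key -> bytes); purely illustrative.
    OBJECT_STORE = {}

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, content BLOB, object_key TEXT)")
    conn.execute("INSERT INTO documents (id, content) VALUES (1, ?)", (b"large PDF bytes...",))

    def archive(doc_id):
        # Move the BLOB to the object store, leaving only a small reference behind.
        (content,) = conn.execute("SELECT content FROM documents WHERE id = ?",
                                  (doc_id,)).fetchone()
        if content is not None:
            key = f"doc/{doc_id}"
            OBJECT_STORE[key] = content
            conn.execute("UPDATE documents SET content = NULL, object_key = ? WHERE id = ?",
                         (key, doc_id))

    def read(doc_id):
        # The application's single read path, wherever the content lives.
        content, key = conn.execute("SELECT content, object_key FROM documents WHERE id = ?",
                                    (doc_id,)).fetchone()
        return content if content is not None else OBJECT_STORE[key]

    assert read(1) == b"large PDF bytes..."   # content served from the database
    archive(1)
    assert read(1) == b"large PDF bytes..."   # same call, now served from the store

In DBcloudbin itself this interception is performed by the data layer and agent described above, not by application code; the sketch only models the contract: archiving changes where the bytes live, not how the application reads them.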
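The write-once, versioned object design mentioned under Simplified backup can also be sketched generically. The class below is a hypothetical illustration of that pattern, not DBcloudbin's API: because an existing object is never overwritten, every previously stored version of an archived content remains recoverable.

    from collections import defaultdict

    class VersionedStore:
        # Minimal model of a write-once, versioned object store.

        def __init__(self):
            self._versions = defaultdict(list)   # key -> list of immutable versions

        def put(self, key, data):
            # Writes always append a new version; nothing is ever overwritten.
            self._versions[key].append(bytes(data))
            return len(self._versions[key]) - 1  # version number just written

        def get(self, key, version=None):
            # Read the latest version by default, or any older one explicitly.
            versions = self._versions[key]
            return versions[-1] if version is None else versions[version]

    store = VersionedStore()
    store.put("doc/1", b"v1 of the document")
    store.put("doc/1", b"v2 of the document")   # v1 remains intact
    assert store.get("doc/1") == b"v2 of the document"
    assert store.get("doc/1", version=0) == b"v1 of the document"

This is what makes archived content intrinsically protected: an accidental overwrite simply produces a new version instead of destroying the old one.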