How Amazon is Achieving Database Freedom Using AWS

When people hear that Amazon is on the verge of concluding an enterprise-level, multiyear initiative to move the company's data from Oracle databases onto Amazon Web Services (AWS), this question might come to mind: Why wasn't the online retail powerhouse, known for its use of leading-edge technologies, already taking advantage of the variety, scale, reliability, and cost-effectiveness of AWS—especially considering that the two are part of the same company?

The first part of the answer is that Amazon was born long before AWS, in an era when monolithic, on-premises database solutions like Oracle still made the most sense for storing and managing data at enterprise scale. And the second is that—even though that era is now over—there are big obstacles to disengaging from Oracle, as many enterprises that want to shift to AWS know all too well.

"If a company like Amazon can move so many databases used by so many decentralized, globally distributed teams from Oracle to AWS, it's really within the reach of almost any enterprise."

Thomas Park, Senior Manager of Solutions Architecture for Consumer Business Data Technologies, Amazon

Untangling from Oracle: Easier Said Than Done

In Amazon's case, obstacles to leaving Oracle included the size of the company's fleet—more than 5,000 databases connected to a variety of non-standardized systems, with ownership and dependencies that were not centrally inventoried. There were personnel-related risks as well: many Amazon employees had built their careers on Oracle database platforms. Would they fully support the move? Would some just leave?

Similar challenges face the many other companies that want to switch from Oracle to AWS. Just like those other companies, Amazon had urgent reasons to make the move work. Amazon engineers were wasting too much time on complicated and error-prone database administration, provisioning, and capacity planning. The company's steep growth trajectory—and sharply rising throughput—required more and more Oracle database shards, with all the added operations and maintenance overhead those bring. And then there were the costs: staying on Oracle would have increased the millions of dollars Amazon was already paying in license fees by a jaw-dropping 10 percent a year.

"It was the same situation for us as it is for so many enterprises," says Thomas Park, senior manager of solutions architecture for Consumer Business Data Technologies at Amazon.com, who helped lead the migration project. "Oracle was both our biggest reason for, and most significant obstacle against, shifting onto AWS."

That was then. Today, Amazon stands on the verge of completing the migration of about 50 petabytes of data and shutting down the last of those 5,000 Oracle databases. How did the company pull off this massive migration?

Managing Culture Change and Technical Complexity

Amazon faced two key challenges during the migration. One was how to tackle the large-scale program management necessary to motivate its diverse, globally distributed teams to embrace the project and track its progress. The other was the technical complexity of the migration. For the project to be successful, it was clear that the company's business lines would need centralized coordination, education, and technical support.

To overcome these challenges, Amazon began by creating an enterprise Program Management Office (PMO), which set clear performance requirements and established weekly, monthly, and quarterly reviews with each service team to track and report progress and program status.

"In establishing the program we had to clearly define what we were trying to achieve and why, before we addressed the ‘how,’” says Dave George, Amazon’s director of Consumer Business Data Technologies. “Once we established the ‘what’ and the ‘why,’ we established clear goals with active executive sponsorship. This sponsorship ensured that our many distributed tech teams had a clear, unambiguous focus and were committed to deliver these goals. Relentless focus on delivery ensured that disruption to other business priorities was minimized while achieving a significant architectural refresh of core systems.”

Also key to the project's success was an AWS technical core team of experienced solutions architects and database engineers. This team made recommendations as to which AWS services would be best suited for each category of Amazon data being migrated from Oracle, including:

  • Amazon DynamoDB: recommended for critical services requiring high availability, mutating schemas, and consistent, single-digit millisecond latency at scale.
  • Amazon Aurora: recommended for services with stable schemas requiring high availability and strong referential integrity.
  • Amazon Simple Storage Service (Amazon S3): recommended for inexpensive, long-term storage of relational and non-relational data.
  • Amazon Relational Database Service (Amazon RDS) for PostgreSQL or MySQL: recommended for non-critical services and for ease of setup, operation, and scaling.
  • AWS Database Migration Service (AWS DMS): helps migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime for applications that rely on it. AWS DMS can migrate data to and from most widely used commercial and open-source databases; a brief sketch follows this list.
  • AWS Schema Conversion Tool (AWS SCT): makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database.
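
As an illustration of the AWS DMS step, here is a minimal sketch in Python with boto3 that creates and starts a replication task performing a full load followed by ongoing change capture, so the source database keeps serving traffic throughout. The region, account number, ARNs, schema name, and task identifier are hypothetical placeholders, not details of Amazon's actual configuration.

    import json

    import boto3

    # AWS DMS client; the region and every ARN below are illustrative placeholders.
    dms = boto3.client("dms", region_name="us-east-1")

    # Selection rule: migrate every table in a hypothetical ORDERS schema.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders-schema",
            "object-locator": {"schema-name": "ORDERS", "table-name": "%"},
            "rule-action": "include",
        }]
    }

    # "full-load-and-cdc" copies the existing data, then keeps applying changes
    # captured from the source, so the source database stays fully operational.
    task = dms.create_replication_task(
        ReplicationTaskIdentifier="orders-oracle-to-aurora",
        SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
        TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps(table_mappings),
    )

    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )

Because the task type is "full-load-and-cdc", the target keeps pace with the source until the owning team is ready to cut over.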

This team also provided formal instruction about specific AWS services, ran hands-on labs, offered one-on-one consultations, and coordinated direct assistance from AWS product teams for Amazon businesses experiencing specific challenges.

"Having this central team staffed with experienced solutions architects and database engineers was crucial to the project's success," says Park. “The team not only helped educate Amazon business teams but provided feedback and feature requests that made AWS services even stronger for all customers."

Amazon also thought carefully about how best to help its Oracle database administrators transition to the new career paths now open to them. One option was to help them gain the skills necessary to become AWS solutions architects. Another was a managerial role in which an Oracle background would be helpful in the ongoing work of bridging traditional Oracle-based environments and AWS Cloud environments.

Database Freedom on AWS

Migrating to AWS has cut Amazon's annual database operating costs by more than half, even though the company provisioned higher capacity after the move. Database-administration and hardware-management overhead have been greatly reduced, and cost allocation across teams is much simpler than before. Most of the services replatformed to Amazon DynamoDB—typically the most critical services, requiring high availability and single-digit millisecond latency at scale—saw a 40-percent reduction in latency, despite now handling twice the volume of transactions. During the migration, service teams also took the opportunity to further stabilize services, eliminate technical debt, and fully document all code and dependencies.
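
For context on where that latency profile comes from, the sketch below shows the simple key-value access pattern such services rely on, again in Python with boto3. The table, key, and attribute names are invented for illustration and do not reflect Amazon's actual schemas.

    import boto3

    # Assumes a table named "customer-orders" with partition key "customer_id"
    # and sort key "order_id" already exists; all names here are invented.
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    orders = dynamodb.Table("customer-orders")

    # Items are written and read directly by key, the access pattern behind
    # DynamoDB's consistent single-digit-millisecond latency at scale.
    orders.put_item(Item={"customer_id": "c-42", "order_id": "o-1001", "status": "shipped"})

    item = orders.get_item(Key={"customer_id": "c-42", "order_id": "o-1001"})["Item"]
    print(item["status"])  # prints "shipped"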

Reflecting on the scope of the project—a migration that affected 800 services, thousands of microservices, tens of thousands of employees, and millions of customers, and that left Amazon with an AWS database footprint larger than those of 90 percent of its fellow AWS customers—Amazon.com sees a lesson for other large enterprises contemplating a similar move.

"No one involved with this migration project would say it was simple, easy, or fun, but it didn't take superpowers, either. If a company like Amazon can move so many decentralized, globally distributed databases from Oracle to AWS, it's really within the reach of almost any enterprise."


Learn More

Learn more about Amazon DynamoDB.