{"id":245675,"date":"2012-12-20T15:00:49","date_gmt":"2012-12-20T23:00:49","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=245675"},"modified":"2016-07-20T07:32:14","modified_gmt":"2016-07-20T14:32:14","slug":"hekaton-breaks-through","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/hekaton-breaks-through\/","title":{"rendered":"Hekaton Breaks Through"},"content":{"rendered":"
By Janie Chang<\/em><\/p>\n In an online, on-demand world, the ability to respond quickly to requests for data has become a significant challenge. Take bwin, for example. To attract and retain customers, bwin, the world\u2019s largest regulated online gaming company, must deliver consistently positive user experiences. But the company\u2019s online gaming systems were bottlenecking at about 15,000 requests per second, and adding more hardware was not solving the problem.<\/p>\n When Microsoft\u2019s SQL Server team offered bwin an opportunity to test a new in-memory technology, bwin expected to see its transaction throughput double, perhaps triple. Instead, the first test increased throughput tenfold; by the end of the trial period, tests had scaled to 250,000 transactions per second. Bwin is now running this new, enhanced version of SQL Server in production.<\/p>\n Microsoft first shared information about the technology behind these results in November at the Professional Association for SQL Server Summit (PASS Summit 2012), where the company announced the forthcoming release of Hekaton, its new in-memory technology, developed through a collaborative effort between Microsoft Research and the SQL Server product team. Hekaton is scheduled to ship with the next major release of SQL Server. 
The announcement\u2019s highlight was a demonstration showing that SQL Server with Hekaton delivered a 30x performance increase without changes to existing application code or hardware.<\/p>\n \u201cThere are several in-memory database systems on the market,\u201d says David Lomet, principal researcher and manager of the Database Group at Microsoft Research Redmond, \u201cbut what really sets Hekaton apart is that it will be integrated into SQL Server as part of Microsoft\u2019s suite of xVelocity in-memory technologies currently available in SQL Server 2012. Customers won\u2019t need to buy and manage a separate product.\u201d<\/p>\n Lomet is referring to a strategic decision made during the Hekaton project. Although technically challenging and more expensive to develop, integrating Hekaton into SQL Server was far preferable from a customer-usability standpoint. This approach enables existing applications to run without changes to code or hardware. But integration with SQL Server was feasible only after the project team had achieved its primary goal: designing a fast, main-memory database engine that could run efficiently on machines with hundreds of cores.<\/p>\n Since early 2009, Paul Larson, principal researcher with the Database Group, has been part of the Hekaton main-memory database project, which owes its genesis to Cristian Diaconu, Erik Ismert, Craig Freedman, and Mike Zwilling of the SQL Server team, along with Larson.<\/p>\n \u201cIn traditional models, the assumption is that data lives on disk and is stored on disk pages,\u201d Larson explains. \u201cThis creates a lot of overhead when you try to access records. When data lives totally in memory, we can use much, much simpler data structures. 
Hekaton\u2019s index data structures and storage structures are optimized on the basis that when a table is declared memory-optimized, all of its records live in memory.\u201d<\/p>\n<h1>Hekaton Accelerates Transaction Throughput<\/h1>\n
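As a concrete illustration of the memory-optimized tables Larson describes, the following is a minimal sketch using the declaration syntax that eventually shipped in SQL Server 2014\u2019s In-Memory OLTP feature (the final syntax was not yet public when this post was written, and the table and column names here are illustrative):<\/p>\n<pre><code>-- Illustrative example: a table declared memory-optimized, so all of
-- its rows live in memory and are indexed by an in-memory hash index
-- rather than stored on disk pages.
CREATE TABLE dbo.UserSession (
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId INT NOT NULL,
    LastActivity DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,       -- rows live entirely in memory
      DURABILITY = SCHEMA_AND_DATA); -- changes are still logged for recovery
<\/code><\/pre>\n Because every row of such a table is guaranteed to be in memory, the engine can replace page-oriented storage with the much simpler data structures Larson mentions, such as hash indexes sized by bucket count.<\/p>\n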
<h1>Taking the Optimistic Approach<\/h1>\n