Oracle announces new database improvement, server, and backup appliance

Larry Ellison kicked off Oracle OpenWorld with his annual keynote, and we covered it with a live blog earlier tonight. Noriyuko Toyoki, SVP of Fujitsu, started things off for the first hour, and the crowd was unimpressed. Energy in the arena picked up once Ellison took the stage and began the company's product announcements.

In-Memory Option for Databases:

The first announcement was the most important: the In-Memory Option for databases. While we initially thought this was simply pinning tables in memory, something that's been possible for years, it turns out that isn't the case. According to Ellison, this new method not only runs things in memory but also stores the data as both a row and a column. Transactional statements run faster on row-based data, whereas analytics run faster on column-based data. While it's a little confusing how that actually increases performance, Oracle has some magic sauce that provides the following performance benefits:

  • 100x faster queries
  • 3-4x faster inserts
  • Fully compatible with all existing apps; no changes needed
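
The row-versus-column tradeoff above can be pictured with a toy sketch in Python (purely illustrative, not Oracle's actual implementation): fetching a whole record is natural on a row store, while aggregating a single field is a tight scan on a column store.

```python
# Toy illustration: why transactions favor row storage and analytics
# favor column storage. Not Oracle's implementation.
import time

N = 1_000_000

# Row-oriented: each record's fields are stored together.
rows = [(i, i % 100, float(i)) for i in range(N)]   # (id, region, amount)

# Column-oriented: each field is stored contiguously.
amounts = [float(i) for i in range(N)]

# Transactional access: fetch one whole record -- one lookup on the row store.
record = rows[42]

# Analytic access: aggregate one field across all records.
t0 = time.perf_counter()
total_rows = sum(r[2] for r in rows)   # row store: touch every tuple
t1 = time.perf_counter()
total_cols = sum(amounts)              # column store: one tight scan
t2 = time.perf_counter()

print(f"row-store scan:    {t1 - t0:.4f}s")
print(f"column-store scan: {t2 - t1:.4f}s")
```

On most machines the column scan comes out well ahead, which is the intuition behind keeping a second, column-oriented copy for analytics.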

To implement this new feature in Oracle 12c, the DBA simply needs to change a parameter, run a command, and drop the indexes. Everything else is supposed to be completely transparent; according to Ellison, "Just throw a switch and turn on in-memory database. EVERYTHING runs faster without a SINGLE change to the app." Oracle showed an on-stage demo that sorted through three billion rows of Wikipedia searches. The first run, on a standard database with indexes, processed the three billion rows in roughly 1.5 seconds. Removing the indexes resulted in a query that ran for several minutes. Turning the in-memory option on processed over seven billion rows a second.
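The indexed-versus-unindexed part of that demo reflects a general rule: without an index, a lookup degrades to a full scan. A minimal illustration, with a binary search over a sorted list standing in for an index (nothing Oracle-specific):

```python
# Toy comparison: "indexed" lookup (binary search) vs. full table scan.
from bisect import bisect_left

data = list(range(0, 10_000_000, 2))   # sorted column of 5M even numbers

def indexed_lookup(key):
    """Binary search: touches ~log2(5M) = 23 entries."""
    i = bisect_left(data, key)
    return i < len(data) and data[i] == key

def full_scan(key):
    """No index: touches up to all 5M entries."""
    return any(v == key for v in data)

assert indexed_lookup(9_999_998) and full_scan(9_999_998)
assert not indexed_lookup(9_999_999)
```

Dropping indexes normally makes queries crawl, as in the middle run of the demo; Oracle's claim is that a fast in-memory columnar scan makes many of those indexes unnecessary.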

M6-32 Server:

Oracle also announced the new M6-32 server, the "Big Memory Machine." The server contains 32 M6 SPARC processors, each with 12 cores, 96 threads, and 3TB/sec of bandwidth. Compared to IBM's P795, the machine has twice the memory capacity, 50% more cores, and twice the bandwidth at roughly one-third of the cost, though Oracle licensing costs were not discussed. This was the machine Oracle ran its Wikipedia demo on, and it's available now, although pricing was not mentioned and is not listed on the website.

Oracle Database Backup, Logging, Recovery Appliance:

This has to be the device with the worst name, and Ellison owned it: "I came up with the name, that's why I make the big bucks. iPhone? Pfft, that's a boring name." The quip drew a laugh from the crowd. The appliance itself is designed to back up thousands of databases. Whereas other appliances back up at the file level, the Oracle Database Backup, Logging, Recovery Appliance (ODBLRA?) backs up at the transaction layer. While Ellison didn't dig into deep technical details, he explained that transaction logs are sent to the appliance, which can sit in the same datacenter or across a WAN in a remote one. Two appliances can also be set up to replicate between themselves.
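Based on that description, transaction-level backup can be pictured as log shipping: the appliance accumulates log records and rebuilds database state by replaying them, which also makes point-in-time recovery natural. This is a toy sketch under that assumption; the class and record format are invented for illustration, not Oracle's design:

```python
# Toy model of a transaction-level backup appliance (illustrative only).

class BackupAppliance:
    def __init__(self):
        self.log = []                      # append-only transaction log

    def ship(self, record):
        """Receive one shipped log record, e.g. ('set', key, value)."""
        self.log.append(record)

    def recover(self, up_to=None):
        """Rebuild state by replaying the log, optionally to a point in time."""
        state = {}
        for op, key, value in self.log[:up_to]:
            if op == "set":
                state[key] = value
            elif op == "del":
                state.pop(key, None)
        return state

appliance = BackupAppliance()
appliance.ship(("set", "balance", 100))
appliance.ship(("set", "balance", 250))
appliance.ship(("del", "balance", None))

print(appliance.recover())          # state after all transactions: {}
print(appliance.recover(up_to=2))   # point-in-time: {'balance': 250}
```

Shipping compact log records rather than file copies is also what makes the same scheme workable across a WAN, per Ellison's remarks.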

Datacenter of the Future:

Ellison ended the keynote by discussing the datacenter of the future. He noted that everyone loves buying cheap two-socket Intel servers, installing virtualized Linux on them, and connecting them via wired Ethernet. He stated that while they're cheap, they are not good for every workload, emphasizing that Oracle's engineered systems are still very important. We found this to be an ironic statement, considering the Exadata platform currently runs on Intel Linux servers, albeit with Oracle's Flash Cache technology and special software.


6 Comments


The Oracle Database Backup, Logging and Recovery appliance sounds like an expensive Data Guard Standby Database...

Ugh, I hate how hardware-centric Oracle is. It's no coincidence that he ended the keynote with the "Datacenter of the Future" pitch.

I much prefer PostgreSQL over Oracle, at least for the GIS-oriented work we do. It seems much less dependent on performance-tuning expertise to get decent performance, almost as if there's some greater overhead cost to running the Oracle databases themselves.

Anyway -- from my experience, at worst, PostgreSQL will give you similar performance to Oracle with the same hardware. At the pleasant cost of $0, where you can spend the savings on higher performing servers.

Edited by Northgrove, Sep 23 2013, 11:38am

The hardware is what makes it happen. PostgreSQL has no magic that can pull performance from the air. It also has no scalability, so it isn't really an option for big applications. Even on our small stuff, the PostgreSQL installs fall flat with a relatively small number of users, while the Oracle DBMS with only 2 CPUs can handle hundreds to thousands of parallel queries.

I tested Oracle 'secure linux' (just rebranded Red Hat) with their express Oracle DB thing a few years ago on a dual-CPU P3 and was shocked to see how incredibly slow it was. I didn't even get to the point of running queries because it was too darn slow, so I wiped the whole thing and set up a LAMP stack, and MySQL ran breezily fast.
I know places just picked Oracle because it's enterprise and has a huge price tag which must 'reek of quality', but I bet if half the places using Oracle tried alternatives, they'd switch, save a lot of money by doing so, and not notice any performance degradation.