Oracle’s acquisition of Pillar fills a gap in its overall storage portfolio. With Sun, it got a very capable NAS solution in ZFS, coupled with dedicated hardware, and it inherited a leading tape product line through Sun’s acquisition of StorageTek. However, it preserved the OEM relationships that Sun had with LSI (for the low end) and Hitachi (for the midrange and high end) for SAN-centered solutions.
For the longest time, people have been predicting the demise of SAN in favor of iSCSI and/or NAS. Well, that has not happened. There are a few strong reasons for that:
Performance and reliability for a SAN-based solution are still a bit better than those of NAS. In addition, the inherent nature of the FC protocol allows for QoS that is not readily available in TCP/IP over Ethernet. Though iSCSI and FCoE are starting to make inroads, large organizations are rather conservative when it comes to rolling out new concepts in the most critical part of their infrastructure.
Scalability of SAN versus NAS matters when one considers the breadth and diversity of computing platforms that need to be supported. SAN still has the best integration in terms of the types of platforms supported (from mainframes to the latest x86 blade servers).
NAS support for concurrency is somewhat limited compared to SAN. Though one can overlay it with specialized filesystems such as IBM’s GPFS, that brings additional cost and administrative overhead. And though there are open-source alternatives such as Lustre, these have not broken out of their niche usage profiles (e.g., HPC).
With Pillar, Oracle has closed a gaping hole in its end-to-end storage vision. And not a moment too soon. With HP gobbling up 3Par, and Dell doing the same with Compellent and EqualLogic, the specialist vendor market has been thinning out rapidly. Just like 3Par and Compellent, Pillar’s story is centered on thin provisioning and automatic data tiering. This fits well with Oracle’s functionality gap in providing tiered storage for its database platform as Big Data gains momentum. Oracle’s strategy has been to have the storage platform automatically tier data blocks across SSD (more recently), SAS, SATA, and other devices based on IO patterns. This has worked well with ZFS, with which the database engine communicates over InfiniBand (the ‘Exadata’ solution). For unstructured data, this approach does not work as well: the data volumes are easily measured in petabytes, and there is a limit to how much disk one can squeeze into a rack (or two). Furthermore, something like Hadoop implemented at scale leverages hundreds of commodity servers accessing pools of shared-nothing storage. Such an architecture is much easier to support at an enterprise level using a SAN.
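To make the IO-pattern-driven tiering described above concrete, here is a minimal Python sketch of the kind of heuristic a tiering engine might apply. The tier names, thresholds, and block identifiers are invented for illustration; this is not Pillar’s or Oracle’s actual algorithm.

```python
# Hypothetical illustration of IO-pattern-based tiering: blocks whose recent
# access counts exceed a threshold migrate to faster media, cold blocks sink
# to cheaper media. Tier names and thresholds are made up for this sketch.
from collections import Counter

TIERS = [  # (name, minimum accesses per sampling window) -- illustrative only
    ("SSD", 1000),
    ("SAS", 100),
    ("SATA", 0),
]

def choose_tier(access_count):
    """Return the fastest tier whose activity threshold the block still meets."""
    for name, threshold in TIERS:
        if access_count >= threshold:
            return name
    return TIERS[-1][0]

def plan_migrations(io_samples, current_placement):
    """io_samples: iterable of block ids observed during the sampling window.
    current_placement: dict of block_id -> tier name.
    Returns dict of block_id -> new tier for blocks that should move."""
    heat = Counter(io_samples)
    moves = {}
    for block, tier in current_placement.items():
        target = choose_tier(heat.get(block, 0))
        if target != tier:
            moves[block] = target
    return moves

if __name__ == "__main__":
    placement = {"blk-1": "SATA", "blk-2": "SSD", "blk-3": "SAS"}
    samples = ["blk-1"] * 1500 + ["blk-3"] * 10   # blk-2 saw no IO this window
    print(plan_migrations(samples, placement))
    # {'blk-1': 'SSD', 'blk-2': 'SATA', 'blk-3': 'SATA'}
```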
Given that Oracle is late to Hadoop-centric solutions, this acquisition makes sense, even if it is happening a bit late. Though, as always, one should not underestimate the voracity of its sales force in pre-selling something still on the drawing board in Redwood Shores.
Quantum Computing and System Testing: Twins separated at birth?
There has been some recent press, prompted by IBM’s 100th birthday as well as Lockheed’s purchase of the supposedly first commercial quantum computer (QC), on what the next frontier in computing looks like. The pundits seem to agree that viable and practical quantum computing would be the next big step after silicon.
Though there are challenges to be overcome, such as the near-absolute-zero temperatures needed for the quantum processor to operate, innovation over time will lower the costs and broaden the usage of such a device. So which areas will benefit the most from such a computing paradigm? Multiple ones, though it seems cryptography has gotten the most ink; it is perhaps also the breakthrough that Artificial Intelligence has been waiting for.
One area that would benefit immensely is system testing. For large, complex systems such as those in commercial airplanes, the number of permutations and combinations of possible scenarios is overwhelming by today’s standards. Inevitably, there is always a combination that was not tested and that, as remote as the chance may be, could result in a potentially catastrophic situation.
As complexity increases, and with it validation considerations, QC offers a fundamentally different perspective on what is, in practice, a case of solving “NP-complete” problems: problems that are practically intractable on a classical computer. Ironically, this itself poses some challenges in the context of ‘who is checking the checker’, as in validating that the QC processor is functioning as intended.
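To put the combinatorial problem in perspective, the small Python calculation below shows how quickly an exhaustive test space grows. The subsystem and state counts are invented for illustration; real avionics suites are far larger.

```python
# Illustrative only: how the exhaustive test space explodes as a system grows.
# The subsystem/state counts below are invented for this sketch.

def exhaustive_cases(subsystems, states_per_subsystem):
    """Number of distinct configurations if every subsystem can independently
    be in any of its states: states ** subsystems."""
    return states_per_subsystem ** subsystems

for n in (10, 20, 40, 80):
    print(f"{n:3d} subsystems x 4 states each -> {exhaustive_cases(n, 4):.3e} cases")

# At even a billion (1e9) test cases per second, 4**80 (~1.5e48) combinations
# would take roughly 4.6e31 years -- which is why exhaustive testing is
# intractable today and why claims of quantum speedups draw so much attention.
```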
Is virtualization synonymous with a private cloud?
Vendor hype notwithstanding, the notion of seamless provisioning and accessibility of computing across private and public clouds “at will” is far from reality. In fact, one would posit that the current evolution is somewhat similar to Enron and its bandwidth-trading scheme towards the end. At one time, there was hype around how movies and related content would be streamed across the pipes to people in their homes. What prevented this from becoming a reality, apart from some missing networking breakthroughs, was the last mile that the telcos controlled. And all they had to do was hold back investment, ensuring sub-optimal throughput to consumer homes.
For large multinational corporations that inevitably have a footprint in the UK (like an office in London), the higher expectations set by the UK Bribery Act are making their way into anti-corruption policies set at a worldwide (ergo, enterprise) level. For those crusading to limit the impact of such payments (facilitation and otherwise) on politics and more in the developing world, this is great news indeed.
As the effective date for the act (July 1, 2011) comes closer, expect similar filings by other Fortune 500 multinationals. Over the next six months, it should not be surprising to see their suppliers held to the same standard, with targeted audits to ensure that robust compliance programs are in place.
Recently, there have been several announcements (five new offerings in May 2011 alone) from established vendors such as IBM, EMC, NetApp and others around their commitment and support for Hadoop. Of course, apart from the software and hardware bundles, one can go to Cloudera to license a supported version and build one’s own infrastructure around it.
So what is the impact on the current vendor ecosystem? In some ways it is analogous to when Oracle introduced a supported version of Linux (then known as ‘Unbreakable Linux’). In a nutshell, it was to strengthen the relationship with the end customer while maximizing the land grab in terms of IT real estate. It was not important whose Linux distribution a client was using, as long as it was not Windows. That meant more money for Oracle licenses while mitigating the likelihood of SQL Server, Exchange, and .NET development tools gaining traction.
Similarly, this seems to be a play by the DW vendors to gain capabilities around the transformation and load dimensions of ETL. The extract portion is not as relevant in the Big Data paradigm, as most of the data sources (aka generators) use proprietary data stores or simply stream raw data into flat files.
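As a simplified illustration of where the transformation work lands in this paradigm, the sketch below follows the Hadoop Streaming contract (mapper and reducer exchanging tab-separated key/value lines) to aggregate page hits from raw clickstream flat files. It is written in Python rather than Java for brevity, and the log layout (timestamp, user, URL) is assumed.

```python
# Sketch of the Hadoop Streaming contract applied to a clickstream transform:
# the mapper emits tab-separated key/value lines, the framework sorts by key,
# and the reducer aggregates. The log layout (timestamp, user_id, url) is assumed.
from itertools import groupby

def mapper(lines):
    """Emit 'url<TAB>1' for each raw clickstream record."""
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) >= 3:                     # timestamp, user_id, url, ...
            yield f"{parts[2]}\t1"

def reducer(sorted_lines):
    """Sum the counts for each url (input must be sorted by key)."""
    keyed = (line.rstrip("\n").split("\t") for line in sorted_lines)
    for url, group in groupby(keyed, key=lambda kv: kv[0]):
        yield f"{url}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # In a real job these would be two scripts fed by Hadoop Streaming, e.g.:
    #   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py ...
    # Here the pipeline is simulated locally on a few fake records.
    raw = ["2011-06-01T10:00\tu1\t/home", "2011-06-01T10:01\tu2\t/home",
           "2011-06-01T10:02\tu1\t/cart"]
    for out in reducer(sorted(mapper(raw))):
        print(out)          # /cart<TAB>1, then /home<TAB>2
```

The point of the sketch is that the heavy lifting is the transform-and-aggregate step over flat files; there is no separate “extract” from a source system.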
This has some rather interesting ramifications for ETL vendors. Independent ones such as Informatica are launching products that address real-time user requirements (e.g., Ultra Messaging), while those that are part of a larger product suite (e.g., IBM’s InfoSphere Service Director) are attempting to add value by exposing existing enterprise data stores to more event-driven data consumers.
Overall, the traditional premium for ETL products is fading and the marketplace is ripe for consolidation. For current enterprises, given Moore’s Law, most data movement that is facilitated via ETL can be handled through more real-time integration suites. For the boundary conditions and data sets in what is termed ‘Big Data’ (e.g., web clickstreams), Hadoop-centric tools will be more cost-effective.
As data volumes continue to grow and decision cycle times shrink, batch approaches such as Hadoop will be replaced by frameworks and architectures that are more ‘real time’. Google already has ‘real-time’ search. How ready is your enterprise for that?
Ever-escalating demands on the Chief Security Officer (CSO)
Over the past few days, breaches at Lockheed Martin, at PBS, and in South Korea demonstrate a level of sophistication in cyber attacks that most organizations would be unable to counteract. Indeed, Lockheed went into lockdown mode for the impacted areas until the security controls were re-baselined to ensure that the attack vector was satisfactorily mitigated.
So what does this mean for organizations seeking effective countermeasures? Perhaps the question should be rephrased: what does a CSO need to do differently? Note that it is the CSO (the Chief Security Officer) who needs to spearhead a defense quite different from what was the mantra of security organizations just two to three years ago. Back then, there was the CSO, typically responsible for physical security (and, in financial institutions, investigations pertaining to financial crimes), and the ‘IT security’ person, embodied by the CISO (Chief Information Security Officer).
Indeed, times have changed. Perhaps it takes the Pentagon stating the new paradigm (that government-sponsored cyber attacks are akin to an ‘act of war’) for the realization to sink in. Sophisticated attacks against the organization require a posture and response that are synchronized across the physical, digital and legal realms.
To better protect the organization, its people and its assets, CSOs need to consider the following trends:
Integrated Teams for Critical Functions: Areas such as op centers focused on detecting attacks (cyber and otherwise) need to be staffed with specialists from IT security and forensics, investigators (more traditional ex-detective personas), as well as members of the physical security response teams. The same would apply to ‘threat response teams’ that may be spun up in case of a major breach. This ensures that appropriate countermeasures can be deployed effectively, as successful attacks tend to cut across the breadth and depth of the controls in place.
Integrated Security Platforms: To support the integrated teams, there needs to be an integrated platform comprising monitoring, event correlation/detection, and reporting tools. Yes, all vendors claim to provide that. In reality, they are not as integrated as purported: necessary coverage requires a mosaic of tools from assorted vendors, each with its own quirks and complicated setup. Larger enterprises have other wrinkles to consider, such as the impact of inconsistent support for things like the ‘leap second’. Finally, the scope of these platforms needs to extend to the physical world (such as sensors in man-traps) to provide a holistic view of the landscape against which attacks may be launched.
Internal Software Development Expertise: Still considered anathema by most security leaders, though it is the Achilles’ heel of their organizations. Just as hackers have built sophisticated tools to support their cause, CSOs need to consider retaining full-time software developers to build the necessary infrastructure (as noted earlier), to build automated ‘agents’ that validate the integrity of code being developed and deployed by their IT counterparts, both internal and third party (see the sketch after this list), and to build specialist programs to automate repetitive tasks. For too long, there has been a reliance on security vendors to provide packaged applications, with most of the integration/development relegated to the realm of ‘Perl scripts that combine this log with that log and filter on this regular expression pattern’.
Consider Legal Tactics as a Core Tenet of the Security Playbook: As Microsoft demonstrated in taking down spam botnets, legal action can be an effective course when other countermeasures have failed. To undertake such actions, there needs to be a robust program around e-discovery, digital forensics, and investigative rigor in determining root cause. That requires in-house talent from technology, security, and legal standpoints. All too often, counsel at more staid organizations believe that outside counsel and consultants are better suited; perhaps so, if it is a response to another organization filing suit. But by the time most attacks are detected (and hopefully countered), the evidence is tainted and enough time has lapsed to make forensics ineffectual.
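Referring back to the internal software development point above, here is a minimal Python sketch of the kind of automated integrity-checking ‘agent’ a security team could build in-house. The manifest format (one “hexdigest path” entry per line) and the file paths are hypothetical.

```python
# Minimal sketch of an integrity-validation 'agent': compare deployed files
# against a known-good SHA-256 manifest. The manifest format ("<hexdigest> <path>"
# per line) and the paths used below are hypothetical.
import hashlib
import sys
from pathlib import Path

def sha256_of(path):
    """Stream the file through SHA-256 so large binaries do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest_path):
    """Yield (path, status) for every entry in the manifest."""
    for line in Path(manifest_path).read_text().splitlines():
        if not line.strip():
            continue
        expected, path = line.split(None, 1)
        if not Path(path).exists():
            yield path, "MISSING"
        elif sha256_of(path) != expected:
            yield path, "TAMPERED"
        else:
            yield path, "OK"

if __name__ == "__main__":
    # Usage (hypothetical): python verify_integrity.py baseline.manifest
    failures = [r for r in verify(sys.argv[1]) if r[1] != "OK"]
    for path, status in failures:
        print(f"{status}: {path}")
    sys.exit(1 if failures else 0)
```

Run against a baseline manifest captured at deployment time, a non-zero exit code flags missing or tampered artifacts for the response team to investigate.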
To summarize, the times are a-changing in terms of the threats (online and otherwise) that organizations are facing. Just as foreign policy for the US and its allies has evolved to address the more faceless, nebulous threat posed by decentralized, nomadic terror cells (typified by AQ), modern CSOs or their proxies have to evolve in a world typified by mercenaries working for elusive entities out of geographies with very different law enforcement aptitudes.