IBM agreed in the settlement not to pay the monetary penalties from any insurance policy. Perhaps this is a tacit acknowledgement of the new policies offered by Chartis and others that indemnify organizations against settlement and legal costs in such matters.
Though IBM had anti-corruption policies in place, according to the SEC, poor controls allowed employees to use local business and travel agencies as conduits for bribes. The agency also alleged that IBM recorded the bribes as legitimate business expenses, which makes one wonder whether back payment of taxes will be due.
Ever-Increasing Expectations for Corporate Anti-Bribery / Anti-Corruption Programs
This article in The Lawyer is a great read. Shell has learnt from its past experiences about the need to conduct due diligence on third parties, and this has resulted in what appears to be a leading anti-bribery and anti-corruption program.
By virtue of its corporate mandate (i.e., natural resources), Shell has to do business in states where ethical business standards may be deemed less than stellar, and it has a supply chain that consists of tens of thousands of vendors.
So it seems prudent that Shell is expanding its due diligence activities to include third parties such as law firms that are involved in joint ventures. In doing so, it is likely mitigating some of the risks that its competitor, BP, encountered in Russia.
As the scope of what constitutes a third party under Section 8 of the UK Bribery Act is ambiguous (‘those who perform services’), large companies will need to leverage a risk-based approach, and potentially technology, to ensure the appropriate level of diligence.
In terms of technology helping in the process, some considerations include:
Third-Party Anti-Bribery Program Questionnaire: Use self-service portals for third parties to submit details on their anti-bribery programs and for independent assessors to submit their examination results. For high-risk situations, this may include the use of outside counsel to verify that the entities have sound and viable programs.
Federated Search: Leverage enterprise-class search engines to trawl disparate enterprise data assets and glean any prior dealings with a third party. This would be a preemptive measure should the company in question become subject to a discovery action.
Integrated Case Management Tools: Large organizations may need to use case management tools to facilitate investigations across multiple locations and geographies. In most cases, an internal group such as an FIU (Financial Investigative Unit) would use these to capture all the work products and outcomes of such investigations. Integration with the systems storing third-party information, as well as financial GL systems, facilitates tagging relevant transactions while allowing easy application of legal holds.
Transaction Monitoring Systems: Though traditionally used by financial institutions to monitor credit and debit card transactions for signs of fraud or money laundering, these may be just as applicable to organizations employing hundreds of thousands of employees. With the appropriate rules and filters, these systems can automatically sift through disbursements and flag anomalies for further scrutiny (a minimal rule sketch follows this list). Furthermore, this can be part of a leading-edge program, demonstrating senior leadership’s intent to impede bribery (and, for that matter, facilitation) behavior.
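To make the transaction monitoring idea concrete, here is a minimal sketch of the kind of rules-based filter such a system might apply to disbursement records. All field names, thresholds and rules below are hypothetical and purely illustrative; a real program would tune them to the organization’s own risk profile.

```python
# Illustrative only: hypothetical fields, thresholds and rules for flagging
# disbursements that may warrant further scrutiny by an investigative unit.

HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder country codes
CASH_THRESHOLD = 10_000              # hypothetical review threshold
ROUND_AMOUNT_MULTIPLE = 1_000        # suspiciously "round" payment amounts

def flag_disbursement(txn: dict) -> list:
    """Return the list of rules a single disbursement record trips."""
    reasons = []
    if txn["method"] == "cash" and txn["amount"] >= CASH_THRESHOLD:
        reasons.append("large cash disbursement")
    if txn["amount"] % ROUND_AMOUNT_MULTIPLE == 0:
        reasons.append("round-amount payment")
    if txn["country"] in HIGH_RISK_COUNTRIES:
        reasons.append("high-risk jurisdiction")
    if txn.get("vendor_type") == "travel_agency" and txn["amount"] >= 5_000:
        reasons.append("large payment to a travel agency")
    return reasons

if __name__ == "__main__":
    sample = {"amount": 12_000, "method": "cash", "country": "XX",
              "vendor_type": "travel_agency"}
    print(flag_disbursement(sample))   # all four rules fire for this record
```

Flagged records would then flow into the case management tooling described above, rather than triggering any automated action on their own.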
In addition to the above considerations, organizations committed to ethical behavior, such as Apple, will augment such programs with in-person visits and audits of vendor sites and critical evaluation of their employee/worker relationships.
This must be somewhat embarrassing for what is generally deemed to be the premier consulting firm when it comes to matters around the FCPA and AML program design in general.
Probably more a case of the right hand not knowing what the left is doing in the sprawling complex of sister companies. Perhaps it is best to implement tighter quality control around such initiatives, given their high visibility and, thus, reputational risk.
With automobile manufacturers seeking to overhaul their business models (e.g., GM offering OnStar as a separate product) and new offerings such as telematics, the traditional electronic control units’ functions are being externalized and integrated in ways never imagined.
The traditional ‘car’ or consumer vehicle is retracing the path that software on mainframes did about three decades ago. As the ‘software’ (i.e., consumer UX, cellular connectivity, integration with service providers) becomes decoupled from the hardware (i.e., the core ECU, engine, transmission, mechanical components), some key points come to mind:
API Design: Those who have been working in the embedded software space know this all too well. API design is perhaps the most difficult job there is in software engineering. Often enough, it is forgotten that the half-life of such software is measured in decades, and the design decisions of today will shape the evolution of the platform. In essence, extensibility and flexibility have to accommodate potential features that are unknown at the time of design (see the sketch after this list of points). It is rarely done right the first time, and some are quicker learners than others.
Security: Yes, security is a big topic these days, with everyone from individuals and malicious groups to companies and nations being concerned about information security on the Internet. But what if someone can break into your car through the cellular/internet connection and steal it? What, a hacker committing grand theft auto? Yes, and academics have demonstrated how easy it is to do. So can you expect your insurance premiums on, say, a new 2014 (if not sooner) car to go up due to a higher likelihood of theft?
Privacy: It is one thing to consent to your mileage and driving patterns being recorded for telematics. It is quite another for that information to be subpoenaed in a civil suit (e.g., a divorce proceeding). In addition, there is the concern that the authorities could gain access to that information without necessarily requiring a specific subpoena, if the courts interpret the data stream as just another cellular transmission. Though the jury is out on this one due to the lack of case history, some of these concerns will arise once public understanding increases. Are manufacturers going to provide an option to disable geo-positioning, say as a physical switch? Very likely.
Degree of Platform Openness: In reality, not all features and capabilities would be exposed in this model. Leveraging the analogy of the browser wars (IE versus Netscape), would the manufacturer favor a strategic partner (e.g., Ford’s alliance with Microsoft for Sync) and expose a minimal set of features via APIs? Or conversely, provide a tiered model, where some partners would have to pay a premium to access certain functions (e.g., repair shops requesting real-time remote diagnostic information to predict the remaining service life of a key component)? Vehicle manufacturers have to think hard about this one - the default answer of preserving the status quo of their vendor ecosystem is perhaps not the optimal one.
Richness of Data Exchange: As more and more services leverage the data stream in near real time, the architectural ramifications of the end-to-end solution are great. In essence, who will own the new ‘data foundries’ in the value chain? Is there an opportunity for someone to collect the data and sell key insights to other parties? If so, are the vehicle manufacturers the ones best positioned to do so, or the platform developers?
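On the API design point above, here is an illustrative sketch of one way a vehicle platform API could be versioned and capability-gated, so that features unknown at design time can be added later and partner tiers (per the openness discussion) can be enforced without breaking existing integrations. All class, capability and tier names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical illustration: a versioned, capability-gated vehicle API surface.
# Partners only reach endpoints their tier entitles them to, and new
# capabilities can be registered later without changing existing ones.

@dataclass(frozen=True)
class Capability:
    name: str             # e.g., "diagnostics.read"
    min_api_version: int  # version in which the capability first appeared
    tier: str             # "basic" or "premium" partner tier

class VehiclePlatformAPI:
    def __init__(self, api_version: int, partner_tier: str):
        self.api_version = api_version
        self.partner_tier = partner_tier
        self._registry: Dict[str, tuple] = {}

    def register(self, cap: Capability, handler: Callable) -> None:
        self._registry[cap.name] = (cap, handler)

    def call(self, name: str, **kwargs):
        cap, handler = self._registry[name]
        if self.api_version < cap.min_api_version:
            raise RuntimeError(f"{name} requires API v{cap.min_api_version}")
        if cap.tier == "premium" and self.partner_tier != "premium":
            raise PermissionError(f"{name} requires a premium partner tier")
        return handler(**kwargs)

# Usage sketch
api = VehiclePlatformAPI(api_version=2, partner_tier="basic")
api.register(Capability("odometer.read", 1, "basic"), lambda: 42_000)
api.register(Capability("diagnostics.read", 2, "premium"), lambda: {"dtc": []})
print(api.call("odometer.read"))   # allowed for a basic partner
# api.call("diagnostics.read")     # would raise PermissionError
```

The design choice being illustrated is simply that version and entitlement checks live at the platform boundary, so the underlying ECU functions can evolve independently of the partner-facing surface.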
In essence, if an open platform for vehicles were to materialize, there are tremendous opportunities for the current players in the value chain, as well as for new entrants to emerge by providing services that would have been a dream just a few years ago.
The evolution of transportation as we know it has just begun….
All these revelations do not bode well for McKinsey’s reputation. Two former partners have already been noted discussing confidential information with a hedge fund. Now there is talk of yet another consultant funneling information to the same hedge fund.
If the senior leadership is busy ‘making’ money on the side as they ‘are not compensated well for their great ideas’, it makes one wonder how the rest of the firm is viewed…
Expect the current partners at ‘The Firm’ to be visiting their clients to assure them that it was a one-off incident.
Curious to see how this turns out for them - they dodged the bullet on Enron. Expect to see some great moves as they attempt to evade this one.
If there is a phrase on the hype trajectory these days (a bit like HTML5 is for software developers), it is the notion of Big Data. So what is Big Data, really? It is the notion of having to analyze and sift through petabytes of data to gain some competitive edge.
The reality is that in most enterprises, the core financial systems are still dealing with data volumes in the gigabytes range. So where is all this additional information coming from? How can a company have petabytes of data to sift through? Are we all going the route of Google, which apparently processes 20 petabytes daily?
In short, Big Data stems from two primary drivers: "computer-generated data" and the incorporation of "unstructured" data.
Computer-generated (aka machine-generated) data has traditionally included system logs, output from measurement devices, etc. Lately, technologies such as RFID and location information from mobile devices have created a preponderance of data that organizations wish to keep and mine for trends. For instance, it is estimated that ~30 million intelligent meters (part of another much-touted concept, the ‘smart grid’) can generate up to 40TB of data in a 90-day period. As volumes and retention periods extend, it is easy to see the amount of data rising very rapidly. In fact, some utility companies are leveraging data warehouse technologies to store the raw smart meter data prior to detailed analysis.
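As a rough sanity check on those figures, the per-meter arithmetic works out as follows (the totals are the estimate cited above; only the breakdown is computed here):

```python
# Back-of-the-envelope arithmetic on the smart meter estimate cited above.
meters = 30_000_000            # ~30 million meters
total_bytes = 40 * 10**12      # ~40 TB over the period
days = 90

per_meter_total = total_bytes / meters        # ~1.3 MB per meter
per_meter_per_day = per_meter_total / days    # ~15 KB per meter per day

print(f"{per_meter_total / 1e6:.1f} MB per meter over {days} days")
print(f"{per_meter_per_day / 1e3:.1f} KB per meter per day")
```

A few kilobytes per meter per day sounds trivial, which is exactly the point: it is the multiplication across tens of millions of endpoints and lengthening retention periods that produces the volumes utilities now have to plan for.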
The emergence of the social web and, more generally, web 2.0 feeds the organizational desire to perform sentiment analysis in addition to the now commonplace text and voice/speech analytics. This in itself can result in large data volumes (as exemplified by Google) and has led to the emergence of new tools such as Hadoop. The current focus is how best to unify the process of analysis across unstructured and structured data in a manner that is seamless and inconspicuous to the overall discovery.
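For readers unfamiliar with the MapReduce model that Hadoop popularized, here is a minimal word-count sketch in Python showing the map and reduce phases over unstructured text. It illustrates the programming model only; a real Hadoop job would distribute these functions across a cluster and far larger inputs.

```python
from itertools import groupby

def mapper(lines):
    """Emit (word, 1) pairs for every word in the unstructured input."""
    for line in lines:
        for word in line.strip().lower().split():
            yield word, 1

def reducer(pairs):
    """Sum the counts for each word (pairs are sorted by word first)."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    text = ["Big Data is a big topic", "big data tools keep emerging"]
    for word, count in reducer(mapper(text)):
        print(word, count)
```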
So to answer the question posed at the outset: ‘Yes, you should care about data volumes increasing exponentially.’ The caveat is that storage of the data itself is the least of your worries. More importantly, the overarching approach to analysis and decision making needs to be fundamentally examined in the context of such large data volumes. This is where most organizations will struggle…
Teradata solidifies lead with Aster Data acquisition
The pace of consolidation continues in the data warehousing space. Teradata is acquiring Aster Data, which is best known for its ability to support analytics on unstructured data by leveraging a user-friendly version of MapReduce.
This is a great way for Teradata to add horizontal capabilities in this space, while leveraging the platform to strengthen a weakness in its current offerings: cloud-based analytics.
This is just two months or so after it announced the Aprimo acquisition. It seems that a red-shift effect is occurring in this segment of the market. Makes me wonder if HP was looking at Aster Data as a potential target to complement its Vertica acquisition.
It seems that the UK government realizes that ‘facilitation payments’ are here to stay, though in reality it can be difficult to differentiate these from bribes. This is a grey area, as the dimension of time can impact the desired outcome: what may seem to be a ‘facilitation payment’ can be interpreted as a bribe if the quick turnaround circumvents a competitor’s efforts.
For those not familiar with the term, facilitation payments are payments made to an official to perform a function he or she should normally carry out anyway. Sadly, it seems that in some emerging markets this is deemed normal practice.
The mantra over the last decade has been to consolidate IT infrastructure by reducing the number of data centers, emptying ‘IT closets’ in regional offices and relocating the ‘server under my desk’ assets scattered across business units. Ironically, with data centers now operating at close to full capacity and CFOs averse to large capital outlays to extend or build new ones, the ‘cloud’ option is becoming more attractive when deploying new capabilities.
One of the challenges for large companies is managing the assets outside the traditional periphery of their now-rationalized data center footprint. Current vendor offerings from Symantec, CA and IBM are focused on addressing an infrastructure stack that is assumed to be operating in a few centralized locations and composed of (for the most part) homogeneous components. Bringing a federated, service-provider-based cloud environment into the mix raises some new challenges:
1. Provisioning ‘capacity’ of computing resources (i.e., compute, memory and storage)
Though there are vendor offerings such as VMware, most large shops have resorted to scripting and custom development to integrate these, along with their current investments in mainframes, UNIX big iron, etc., into a true ‘enterprise’ level dashboard. This results in overhead and reduced agility that some of the point solutions were meant to address in the first place. As virtualization gains traction, new problems are arising around retiring VMs that are no longer needed.
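A crude illustration of the kind of glue scripting alluded to above: rolling capacity data from several silos into a single view. The connector interface and the fields are invented for illustration; in practice each source would wrap a vendor-specific SDK, CLI or API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "glue" scripting many shops write to roll
# capacity data from several silos into one enterprise-level view.

@dataclass
class CapacityRecord:
    pool: str              # e.g., "vmware-cluster-01", "mainframe-lpar-a"
    cpu_used_pct: float
    mem_used_pct: float
    storage_used_pct: float

def collect_from_sources(sources):
    """Each source exposes a hypothetical .capacity() method returning records."""
    records = []
    for source in sources:
        records.extend(source.capacity())
    return records

def dashboard(records):
    """Print pools ordered by CPU utilization, busiest first."""
    for r in sorted(records, key=lambda r: r.cpu_used_pct, reverse=True):
        print(f"{r.pool:20s} cpu={r.cpu_used_pct:5.1f}% "
              f"mem={r.mem_used_pct:5.1f}% disk={r.storage_used_pct:5.1f}%")

if __name__ == "__main__":
    class FakeSource:      # stands in for a vendor-specific connector
        def capacity(self):
            return [CapacityRecord("vmware-cluster-01", 81.5, 64.0, 72.3),
                    CapacityRecord("mainframe-lpar-a", 55.0, 70.2, 40.1)]

    dashboard(collect_from_sources([FakeSource()]))
```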
2. Operational Monitoring for Performance and Uptime
An even bigger challenge is ensuring that the individual machines (virtual or not) are available from an end-to-end business process perspective. This implies that there is an understanding of which resources are necessary for a business process (and the expected minimal level of performance) as well as the dependencies across the infrastructure stack underneath. Those who have been on the ITIL journey recognize the challenges of keeping such dependencies current. Furthermore, tools today do not span all the infrastructure components, requiring one tool to monitor virtual machines, one for networks and yet another for storage.
A bigger challenge still is root cause analysis, which relies on event correlation across the tiers. It requires uniform logging conventions, standardized time management and, finally, aggregation of logs in one place to conduct correlation analysis. Most cloud service providers do not allow their clients to access the logs in a manner that makes this remotely possible. It is even rarer for the log output to be in a format that is readily consumable without major transformation and inference being applied.
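As a small illustration of why uniform timestamps and a single aggregation point matter for correlation, the sketch below normalizes two hypothetical log formats to UTC and clusters events that fall within a short window. The formats and messages are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Illustration only: two hypothetical log formats from different tiers,
# normalized to UTC so events can sit on one timeline and be correlated.

def parse_app_log(line):
    # e.g. "2011-03-01T10:15:30+05:30 ERROR payment timeout"
    ts, _, msg = line.partition(" ")
    return datetime.fromisoformat(ts).astimezone(timezone.utc), msg

def parse_san_log(line):
    # e.g. "01/03/2011 04:45:29|WARN latency spike on lun7" (assumed UTC)
    ts, _, msg = line.partition("|")
    dt = datetime.strptime(ts.strip(), "%d/%m/%Y %H:%M:%S")
    return dt.replace(tzinfo=timezone.utc), msg

def correlate(events, window_seconds=60):
    """Group normalized (timestamp, message) events within the time window."""
    events = sorted(events, key=lambda e: e[0])
    clusters, current = [], []
    for ts, msg in events:
        if current and (ts - current[-1][0]) > timedelta(seconds=window_seconds):
            clusters.append(current)
            current = []
        current.append((ts, msg))
    if current:
        clusters.append(current)
    return clusters

if __name__ == "__main__":
    events = [parse_app_log("2011-03-01T10:15:30+05:30 ERROR payment timeout"),
              parse_san_log("01/03/2011 04:45:29|WARN latency spike on lun7")]
    for cluster in correlate(events):
        print([msg for _, msg in cluster])   # both events land in one cluster
```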
3. Information Security Incident Management
The posture of the security perimeter can vary across vendors. In addition, the variability in logging and information integration (such as access to non-critical events from the cloud providers’ IDS systems) can make qualification of potential breaches difficult, and perhaps possible only after the fact.
In addition, one needs to consider the applicability of computer forensics when the scope includes equipment from service providers. Though the organization may have rights to the virtual instances, the underlying physical server may be a different story. Should a case require access to the underlying physical server (as it was at the time of the incident), would the service provider be able to identify the physical host on which the VM was running at the time and provide a snapshot in lieu of a litigation hold? These questions are best answered when one is evaluating a federated cloud environment, rather than in the midst of an incident unfolding…
4. Capacity Planning
This is challenging at the best of times in the traditional paradigm of physical, dedicated hardware. Considering that virtualization now extends to networks and storage in addition to server-centric assets, there is a need to revisit best practices for capacity planning.
When determining the level of resources necessary to support current trends, one does not generally account for the limitations of throttling and allocation in the current tools. For instance, server virtualization products such as VMware and Microsoft’s Hyper-V are challenged to provide a controlled IO profile to match the desired workload. They may be able to throttle IO requests across the VMs on a physical instance, but they cannot resolve the inherent conflict of multiple physical servers competing for IO on a shared SAN. To alleviate such bottlenecks, capacity planning requires an experienced hand to design appropriate storage blocks. From an outsider’s perspective this may be seen as wasteful, though it is necessary to ensure that the desired performance is not constrained (a back-of-the-envelope illustration of this contention follows below).
In a cloud environment, such transparency is not available, let alone any influence on the service provider’s capacity planning. As such, applications with high performance expectations along atypical dimensions may not be suited for migration to a cloud environment at this stage.
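To make the shared-SAN contention point concrete, here is a back-of-the-envelope illustration with entirely hypothetical numbers; the takeaway is that per-host throttling does nothing to stop the aggregate demand from exceeding what the shared storage block can actually deliver.

```python
# Hypothetical numbers: per-host throttling cannot prevent the aggregate
# demand of many hosts from exceeding the shared SAN block's capacity.

hosts = 12                  # physical servers sharing one SAN block
vms_per_host = 20
iops_per_vm_peak = 150      # hypothetical peak demand per VM
san_block_iops = 20_000     # hypothetical deliverable IOPS for the block

aggregate_demand = hosts * vms_per_host * iops_per_vm_peak
print(f"aggregate peak demand: {aggregate_demand:,} IOPS")
print(f"SAN block capacity:    {san_block_iops:,} IOPS")
print(f"oversubscription:      {aggregate_demand / san_block_iops:.1f}x")
```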
Though much ink has been spilled on the integration of applications and business processes spanning from data centers into the clouds, much work remains to be done to ensure that IT operations are effective and efficient in such an operating environment.
Probably the most common refrain you hear from anyone arguing against the United States agreeing to significant emissions reductions is, “what about China and India?” China is, after all, now the world’s largest total emitter of carbon dioxide.