Answer by Nauman Noor:
I don’t work for Gartner, nor am I affiliated with them in any manner. I do feel, though, that the statement is directionally quite believable once you consider three key aspects:
1. What CMO spend is being compared against
If you were to review the spend in most IT organizations, a good 60-70% (if not a bit more) is dedicated to keeping things running (e.g., data center operations, electricity, leases on infrastructure, etc.). Another 10% - 15% is typically dedicated to bug fixes, maintenance and general support.
That leaves 15-30% of the budget for providing new capabilities - discretionary spend, in some ways. That then has to be divvied up across the various business areas and, of course, some goes toward making IT itself function better.
Gartner’s projection is focused on purchase of new technologies and services by CMOs. In terms of an apples to apples comparison, it would be fair to compare and contrast this against discretionary spend.
2. What is included in the CMO technology spend
Gartner’s classification covers more than just servers and analytics when it comes to CMOs and technology. It includes SEO tools, social media platforms, e-commerce and a variety of other things. They are predicting that with the adoption of SaaS, CMOs will increasingly engage the providers directly versus relying on their in-house IT counterparts. For instance, one can leverage Amazon for hosting the e-commerce site and for fulfillment of orders as well.
3. Overall industry or segment specific
Spending on IT varies by industry, just as marketing spend does. So there will be sectors / segments where marketing spend is nominal (e.g., OEM manufacturers) while others will spend less on IT (e.g., retailers, restaurants, food services). I believe the projections are at an aggregate level.
As with any projection, the variance is huge; the key point is that CMOs will be increasingly engaged when it comes to the use of technology within the enterprise. Over time, it will be viewed less as something left to the CIO to determine.
TCP/IP socket starvation may have caused NASDAQ outage…
It appears that the outage was caused by NASDAQ’s SIP (Securities Information Processor) going berserk, which resulted in trading being shut down for a good portion of the afternoon that day.
The article points out that there were connectivity issues from Arca [to NASDAQ] which may have been a precursor to the malfunction occurring. As the good folks at Nanex point out, it could have been a simple case of TCP/IP socket starvation of the primary SIP instance that cascaded into a mess forcing a shutdown…
In a nutshell:
It is an interesting failure mode, which might have been avoided had there been a realistic performance test. Admittedly, the conditions that led to it are hard to replicate, though it points to a single point of failure when the same networking stack fulfills two tasks (primary communication and heartbeat monitoring).
When designing heartbeat monitoring for mission critical / continuously available systems, it is a good design practice to have multiple paths for heartbeats to ensure system uptime.
This can include the use of point-to-point Ethernet connectivity (with UDP and persistent TCP connections as checking mechanisms), serial connections (for short distances) and/or common data blocks on a shared storage device. This way, should one path fail, alerts can be triggered informing the NOC of a potential issue needing investigation.
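To make this concrete, here is a minimal sketch of that multi-path idea: heartbeats arriving over several independent channels are tracked per path, and the peer is only declared dead when every path has gone silent. The class and path names are illustrative, not taken from any real product.

```python
import time

class MultiPathHeartbeatMonitor:
    """Track heartbeats arriving over several independent paths (e.g. a
    UDP channel, a persistent TCP connection, a block on shared storage)
    and declare the peer down only when *every* path is stale. A single
    degraded path just raises an alert for the NOC to investigate."""

    def __init__(self, paths, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {p: None for p in paths}  # path -> last heartbeat time

    def beat(self, path, now=None):
        """Record a heartbeat received on one path."""
        self.last_seen[path] = time.monotonic() if now is None else now

    def degraded_paths(self, now=None):
        """Paths that have missed their window: grounds for an alert, not failover."""
        now = time.monotonic() if now is None else now
        return [p for p, t in self.last_seen.items()
                if t is None or now - t > self.timeout_s]

    def peer_is_down(self, now=None):
        """Only when all paths are silent do we treat the peer as dead."""
        return len(self.degraded_paths(now)) == len(self.last_seen)
```

The design choice here is the asymmetry: one stale path means a broken monitoring channel, while all paths stale means the peer itself has likely failed.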
Another good practice is to address potential loss of reliability. It could be due to a stale version of the application running (while other instances are using a newer release), application malfunction (as was the case at NASDAQ) or malicious activity (e.g., via malware).
Hence, once a server has been declared unavailable - or, more accurately, unable to function as designed - it should be removed from the production environment. Through automation and scripting, one can take production-facing network access offline (via load balancer, router, firewall, etc.), begin a forced server shutdown and/or simply kill the primary application(s) hosted.
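A rough sketch of that quarantine sequence, assuming a hypothetical load-balancer CLI (`lb-ctl` here is a placeholder, not a real tool) and SSH access to the host; the key point is the ordering: stop routing traffic first, then kill the application.

```python
import subprocess

def quarantine_server(host, app_pid, lb_remove_cmd, runner=subprocess.run):
    """Take a server that can no longer function as designed out of
    production: drop it from the load-balancer pool first, then
    force-kill the primary application so a stale instance cannot keep
    serving. `lb_remove_cmd` stands in for whatever CLI your load
    balancer actually provides."""
    steps = [
        lb_remove_cmd + [host],                      # 1. stop routing traffic to it
        ["ssh", host, "kill", "-9", str(app_pid)],   # 2. kill the application
    ]
    for cmd in steps:
        result = runner(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # surface the failure to the NOC rather than hide it
            raise RuntimeError(f"quarantine step failed: {cmd}")
```

The `runner` parameter exists so the sequence can be exercised in tests without touching real infrastructure.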
It appears the IT team at NASDAQ would benefit from a lesson in designing continuously available applications, and perhaps a visit to organizations that have to bake key non-functional requirements into their production systems (thinking the likes of Google, Amazon and perhaps their counterparts at other exchanges like NYSE).
Hopefully, they don’t try to learn ‘on the job’…
NEW YORK/WASHINGTON (Reuters) - Regulators are questioning how robust Nasdaq OMX Group’s systems are after last week’s massive trading outage, while shrugging off a spat with NYSE Euronext as a distraction…
Additional Declassified Documents Relating to Section 702 of FISA
August 21, 2013
December 8, 2011 — Lisa Monaco, John C. (“Chris”) Inglis, Robert Litt - Statement for the Record before the House Permanent Select Committee on Intelligence
February 9, 2012 — Lisa Monaco, John C. (“Chris”) Inglis, Robert Litt - Statement for the Record before the House Permanent Select Committee on Intelligence
NSA has an official Tumblr blog, which it is using to disclose recently declassified documents. Guess they know their target audience and are with the times. #fisa #NSA
Answer by Nauman Noor:
All the other answers are on point. The gist of it is that Paul and team are able to identify the traits in founders that make start-ups successful, augment their teams with managerial talent, provide coaching and then leverage their ecosystem.
Nothing unusual at a high level compared to all the other incubators, though the difference is in execution. If you have time, I would highly recommend reading Paul Graham’s essays on his site.
Their focus continues to be on execution and continuous coaching - most people pay lip service to these principles though in practice fall short.
Pretty good approach to estimating how many people are watching a popular YouTube video at a given time…
Answer by Ryan Hardy:
Gangnam Style reached 500 million views on October 19th, 2012 and reached 1 billion views 63 days later on December 21, 2012. A change of 500 million views over 63 days implies that the video was being viewed an average of 92 times per second in that period. Given that the video is 4:13 long and assuming that 1 view equals 1 person, this means that, on average, about 23,000 people worldwide are currently mesmerized by PSY’s dance moves. This is comparable to the population of a small town or the attendance at a very large indoor concert.
Here is the calculation on Wolfram Alpha
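The same arithmetic can be checked in a few lines of Python, using only the figures quoted in the paragraph above:

```python
# Figures from the post: 500M additional views over the 63 days between
# the 500M mark (Oct 19, 2012) and the 1B mark (Dec 21, 2012).
views = 500_000_000
days = 63
video_seconds = 4 * 60 + 13   # the video runs 4:13

views_per_second = views / (days * 24 * 3600)           # ~92 views/s
concurrent_viewers = views_per_second * video_seconds   # ~23,000 people
print(f"{views_per_second:.0f} views/s, "
      f"~{concurrent_viewers:,.0f} people watching at once")
```

This confirms the ~92 views per second and roughly 23,000 simultaneous viewers cited above.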
This is an average, but how much does this vary during the day? I warn you that the following is extremely preliminary and somewhat lazily done by my usual standards.
I extracted the data from this image to generate an approximate distribution of the world’s population by longitude, using edge detection and a handy data-extraction tool for Google Chrome. I’m too lazy to go and find geospatial population data.
After duplicating the histogram and normalizing, I then calculated the fraction of the world’s population in a moving 12 and 16 hour range of longitudes to determine how many people are awake and how many people are in daylight.
Here’s what it looks like. The way to read it is: at approximate local noon in the time zone on the x-axis, a fraction y of the world’s population is either awake or in daylight. I’ll admit that determining the proper amount to shift these plots horizontally is confusing, so this plot might change in future edits. I assume people wake up at 8 am worldwide and go to bed at midnight, though this varies considerably around the world.
Regardless of the proper shifts, approximate univariate statistics can be derived from these data. The standard deviations of both quantities are 0.04. Awake fraction and daylight fraction can change by 0.17 and 0.23, respectively, in the course of the day. The means of these are 0.51 and 0.67. The 2.5th and 97.5th percentile awake fractions are 0.6 and 0.75.
I therefore predict hourly Gangnam Style views have a standard deviation of 6%, or +/-1,400 views and that 95% of the time, the number of people watching is between 20,600 and 25,800 people.
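The final estimate follows directly from those statistics; here is the arithmetic, taking the figures above as given rather than re-deriving them:

```python
# Numbers assumed from the analysis above, not re-derived here.
mean_awake, std_awake = 0.67, 0.04
p2_5, p97_5 = 0.60, 0.75      # 2.5th / 97.5th percentile awake fractions
avg_viewers = 23_000          # average concurrent viewers from earlier

rel_std = std_awake / mean_awake         # ~6% relative variation
viewers_std = avg_viewers * rel_std      # ~1,400 viewers
low = avg_viewers * p2_5 / mean_awake    # lower bound, ~20,600
high = avg_viewers * p97_5 / mean_awake  # upper bound, ~25,700
```

Scaling the average viewer count by the percentile-to-mean ratios reproduces the quoted 95% range to within rounding.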
If a design, particularly a team design, is to have conceptual integrity, one should name the scarce resource explicitly, track it publicly, control it firmly — Fred Brooks
“In Mexico City, planners turn vacant space under freeways into places to work, dine, play
Nick Miroff. May 29, 2013
Mexico City — You can’t get something out of nothing. This is common sense, not to mention a principle of physics and mathematics.
Yet the amazing science of Mexico City’s real estate development obeys no such laws.
Urban planners here, in one of the world’s most populous and crowded cities, have found a way to add thousands of square feet of new commercial and recreational space. And it isn’t costing local government a cent.
Their gambit is called Under Bridges (“Bajo Puentes”), and it’s a simple idea: Convert the vacant, trash-strewn lots beneath Mexico City’s overpasses and freeways into shopping plazas, public playgrounds and outdoor cafes.”
Photo: Dominic Bracco II / Prime - A man rests on one of the new park benches in one of Mexico City overpass developments on May 27.
via massurban & Washington Post
I thought this is a rather interesting use of what is considered dead urban space. It probably reduces crime intensity and helps with maintaining cleanliness.
Visual on what makes up Great Britain, the United Kingdom and the British Isles
The difference between UK, Britain and the British Isles
Source: Ordnance Survey Blog
Critical system downtime, upgrades and planning for continuous availability
As this article notes, maintaining system uptime is not as easy as merely upgrading the underlying platform. Actually, IBM’s mainframes are still leading edge when it comes to ensuring transaction integrity coupled with high reliability (which is often confused with availability).
Though most management consultants would recommend replacing a mainframe environment with an x86 platform coupled with the likes of VMware, there is a fair amount of engineering that IBM provides which is taken for granted and not replicated when swapping just the infrastructure.
As the article notes, it is interesting to hear that bank executives believe that a hardware upgrade will solve uptime and availability issues. Reality is a bit more complex - in the case of RBS, it appears that there is a knowledge gap in understanding the current landscape which then led to broken operational processes, culminating in a four day [emphasis added] outage of all core bank systems.
A hardware upgrade will add more computational capacity, which is redundant in addressing the core issues - and may in fact degrade system reliability. To migrate onto the new hardware, one would need a solid understanding of current processes, which has clearly been lacking. Thus the migration would introduce more failure points and potentially new defects, resulting in an elevated risk of system outages in the future.
Nonetheless, the upgrade is costly (~$700M) which adds to the perception of serious investment being made in addressing the issue. The only ones benefiting from this are the consultants and potentially the hardware and software sales teams.
If you fail, don’t associate yourself with that failure. It’s an event, it’s not who you are. — Great advice from Jason Sosa, founder of IMRSV (one of Time.com’s 10 startups to watch in 2013), on perseverance in the face of challenges. (via fastcompany)