The article is a good reflection on some of the current press coverage around the impact of Google search on people's ability to recall information. Most people equate that ability with intellect; I would disagree, as it is just one component of it.
In fact, as information becomes readily available and accessible (e.g., the author talks about looking up trivia using his smartphone), the challenge will be discerning the authoritative sources and the relevant information, and then performing the actual synthesis.
First, let's talk about distilling the sources. Apart from historical facts that one can validate across multiple sources, commercially valuable insights are more difficult to vet. In fact, the more valuable the information nugget, the more likely it is to be unique, so one has to vouch for its validity indirectly (e.g., validate the source rather than the fact itself). One may argue that there is a premium associated with that. Ironically, in an age where raw data is plentiful, actual nuggets of insight will become more valuable and fewer and farther between.
Speaking of relevance, to determine what is worth the premium, it has become incumbent on organizations, and thus people, to think ever more critically. In fact, the neurons freed from the need to memorize factoids need to be put to use on more creative tasks. The pace of change is accelerating, and the feedback loop for situational awareness is shortening rapidly. The challenge is that learning institutions have yet to consider the shifts that technology is imposing on society. Curricula focused on rote memorization have to be revised to focus more on logic and critical thinking. In addition, adoption of technology has to go beyond the delivery mechanism that is being embraced. Kids, and for that matter adults, have to be taught in a manner where readily available information is leveraged to gain higher-level understanding and analysis.
The last point is around synthesis. As is oft said, 'practice makes perfect'. There has been a fair amount of research and discussion on this topic (e.g., 'Outliers'), and it is important to note that there is a notable lag as society adopts new concepts. Having said that, most don't appreciate that the period can span a generation or two, and in the interim state, there is an inherent advantage to those ahead of the curve.
Society is truly at the start of a radical transformation - how humans learn and interact is just beginning to feel the impact of technology…
Interesting article on how the payments made to British police by News Corp would in effect be a violation of FCPA statutes. As the article reminds readers, the Act generally prohibits American companies and citizens from corruptly paying – or offering to pay – foreign officials to obtain or retain business.
Though the connection between the heir apparent James Murdoch (a status that is becoming less likely every day) and the actual payments is tenuous at best, any investigation would be pretty severe in terms of publicity. It would also impact the holding company's recent bid to buy out the remainder of BSkyB.
It is going to be interesting to see whether the DoJ delays initiating an investigation given the actions noted to date, and how the Murdoch properties in the US cover the ongoing trials and tribulations. And summers in Europe are idyllic? Pssh…
HP's tablet launch finally demonstrates the monetization of its webOS investment in a public manner. That is timely, given that Android and iOS are already battling it out and Microsoft is figuring out how to get Windows 8 to work in this segment (tablet PCs don't count). And oh, let us not forget RIM's PlayBook, though it is perhaps further behind in gaining market share than all the others.
At this stage, it is fair to say that the dominant players are Apple and Google, though as the number of tablet (a subset of mobile) OSes increases, it poses challenges to organizations seeking to provide their customers a consistent and exemplary user experience regardless of platform.
So how would a large organization support a rich user experience on all these mobile platforms through a combination of mobile applications and mobile web sites? Some quick thoughts come to mind:
For content-heavy interactions, such as informational / marketing content (e.g., product information), it is probably safe to use a mobile platform development tool, which then ports (aka 'compiles') to a platform-specific binary. Though Adobe's Flash has gotten a bad rap in some areas, with Flash and Flex Builder one can support multiple platforms with relative ease. It also helps that Adobe's products tend to be prevalent in the creative circles that tend to author such apps.
For business productivity applications (e.g., supporting field personnel or employees), though mobile application development suites (such as PhoneGap) can provide a level of standardization across platforms, there is no panacea that provides 'write once, run anywhere'. Just as Java struggled to support native capabilities across platforms, organizations should plan ahead to make sure that the appropriate capabilities are tuned to platform-specific nuances such as UI, app integrity validation and payment integration.
For the time being, for complex applications, organizations will need to support two platforms natively, with more generic capabilities managed through runtimes rendered by the application suite or through embedded HTML5 code.
Until more consistent HTML5 support emerges, the developer adoption curve for new entrants (such as Windows 8 and webOS) is going to be pretty challenging. This will limit their corporate adoption to peripheral use cases where timely access to rich content is a high priority and only limited transactional support is needed.
July 1, 2011 is when the new law comes into force. Though time will tell how successful it will be, a strong influence is going to be the enforcement side of the equation. The pressure is on the SFO (Serious Fraud Office) to actively pursue the initial cases and demonstrate the seriousness with which it is going to pursue violators.
The next few months are going to be interesting indeed…
Is Dynamic DNS a transitory technology for Data Centers?
Back when the NASDAQ was flirting with 5000 and the bull market seemed to know no bounds, an innovative company called Palm put forth the notion of downloading web content as 'Web Clippings'. This was hailed as quite the revolution, as it would open the door for web applications to be accessible by handhelds, which in turn would synchronize with the 'Internet' once docked in a cradle hanging off a PC.
Well, a decade later, the ubiquity of data networks and open standards such as WAP have put an end to that notion. If there was a transitory technology, it was that - it filled a gap for the few years when handhelds lacked native IP connectivity and the carriers were too busy protecting their voice networks to see over the horizon.
Similarly, there is a potential that new innovations in network engineering may diminish the necessity of dynamic DNS and large-scale IP-MAC mapping management. When virtualization first arrived on the scene (circa 2003), it soon became apparent that to overcome the IP changes that virtual machine movement requires, DNS infrastructure needed to be resilient, readily scalable and capable of managing thousands of hosts en masse.
That spawned a growth in appliances to address the need in data centers. Some of the notable vendors are BlueCat Networks and Infoblox, with limited functionality also appearing in product lines from F5 Networks and Avaya (amongst others). Such appliances are becoming a necessity as IT operations seek to address smaller RTO windows for critical applications by automating the migration of workloads across data centers. Of course, most network engineers would note that such an approach is a kludge at the best of times and more of a hack at others. There are issues around TTLs and letting the DNS space stabilize after transference, requiring a level of tolerance in the applications while ensuring that the OS layer is optimized for such an approach.
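The TTL concern can be made concrete with a back-of-the-envelope sketch (a hypothetical illustration with made-up numbers, not any vendor's tooling): a resolver that cached the record just before the repoint can keep serving the old address for up to one full TTL, which is why operators typically lower TTLs ahead of a planned migration.

```python
# Hypothetical sketch of the DNS staleness window during a cross-data-center
# migration. The function name and the numbers below are illustrative
# assumptions, not part of any vendor product.

def worst_case_staleness(ttl_seconds: int, update_lag_seconds: int) -> int:
    """Worst case: the record update takes `update_lag_seconds` to reach
    the authoritative servers, and a resolver cached the old record just
    before the change, holding it for one full TTL."""
    return update_lag_seconds + ttl_seconds

# With a routine 1-hour TTL, clients may chase the old IP for over an hour:
print(worst_case_staleness(ttl_seconds=3600, update_lag_seconds=60))  # 3660

# Pre-lowering the TTL to 30s ahead of the planned move shrinks the window:
print(worst_case_staleness(ttl_seconds=30, update_lag_seconds=60))    # 90
```

Note, too, that some resolvers and OS stub caches ignore or clamp TTLs, which is part of why the applications themselves need the tolerance mentioned above.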
Now, with the adoption of IPv6 (more a stimulus of change) and flatter network topologies through next-generation fabrics such as Juniper's QFabric and Cisco's FabricPath, it is an open question whether dynamic DNS will be needed to the extent it is today. Though certain use cases will still need it, the days of dedicated infrastructure to support it will wane in large shops. It would be easier to have the large network vendors (e.g., Cisco, HP and Juniper) build it into their management tools rather than having a bolt-on solution.
It would not be surprising to see, barring fallout from the IP litigation between BlueCat and Infoblox, the specialists acquired by the network vendors. At the right price, it would provide the vendors a foothold in IT shops that are transitioning to next-generation networks over the next 5 years.