
Morgan Stanley Mobile Internet Report

For those ready to digest a lot of information, Morgan Stanley has posted its Mobile Internet Report, 424 pages of very good data. There is a condensed 92-page presentation that highlights the key themes of the report. At a minimum you should review the “Setup” material, as it is a good overview of the next major computing cycle, the Mobile Internet. In these trying economic times, while global economic activity declined 5%, the Mobile Internet industry grew 20%.

This 5th cycle of computing will transform and grow today’s emerging ‘social networking’ into ‘enterprise networking’ marketing tools. The key takeaways from the study are:

  • Material wealth creation / destruction should surpass earlier computing cycles. The mobile Internet cycle, the 5th cycle in 50 years, is just starting. Winners in each cycle often create more market capitalization than in the last. New winners emerge, some incumbents survive – or thrive – while many past winners falter.
  • The mobile Internet is ramping faster than desktop Internet did, and Morgan Stanley believes more users may connect to the Internet via mobile devices than desktop PCs within 5 years.
  • Five IP-based products / services are growing / converging and providing the underpinnings for dramatic growth in mobile Internet usage – 3G adoption + social networking + video + VoIP + impressive mobile devices.
  • Apple + Facebook platforms are serving to raise the bar for how users connect / communicate – their respective ramps in user and developer engagement may be unprecedented.
  • Decade-plus Internet usage / monetization ramps for mobile Internet in Japan plus desktop Internet in developed markets provide roadmaps for global ramp and monetization.
  • Massive mobile data growth is driving transitions for carriers and equipment providers.
  • Emerging markets have material potential for mobile Internet user growth. Low penetration of fixed-line telephone and already vibrant mobile value-added services mean that for many EM users and SMEs, the Internet will be mobile.

Where will these new applications and business tools emerge? From private and public Cloud Computing platforms that optimize capacity, performance and cost for daily and monthly computing needs.

As with all major computing cycles, new winners and losers will be defined – “Some Companies Will Likely Win Big (Potentially Very Big) While Many Will Wonder What Just Happened”.

WD enters the SAS HDD market: entry point implications

Earlier this month Western Digital announced the WD S25, a 10K rpm, 2.5-inch, small form factor drive with 3 Gb/s and 6 Gb/s SAS interfaces. The press release can be found here: WD® ENTERS TRADITIONAL ENTERPRISE HDD MARKET WITH FIRST SAS PRODUCT. Sustained sequential performance is claimed to be 128 MB/s, with a 1.6M hour MTBF reliability rating. I’m sure the tier one system vendors will welcome the competition.
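
As a quick sanity check on what a 1.6M hour MTBF means in practice, here is a minimal sketch converting the claimed MTBF into an annualized failure rate, assuming 24x7 duty and the usual exponential-failure approximation:

    # Translate the claimed 1.6M hour MTBF into an annualized failure rate (AFR),
    # assuming 24x7 operation and the standard approximation AFR ~= hours/year / MTBF.
    HOURS_PER_YEAR = 8766          # average year length in hours
    MTBF_HOURS = 1_600_000         # WD S25 claimed MTBF

    afr = HOURS_PER_YEAR / MTBF_HOURS
    print(f"Approximate AFR: {afr:.2%}")   # ~0.55% per drive per year

    # Expected annual failures scale roughly linearly with population size:
    drives = 1000
    print(f"Expected failures per year in a {drives}-drive pool: {afr * drives:.1f}")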

The interesting aspect of this announcement is the entry point and the observations of Tom McDorman, who runs WD’s enterprise storage group. In a phone interview with Howard Marks of Network Computing, Western Digital Returns To Enterprise Drives, Tom noted that “15K RPM drive sales are down while sales of I/O oriented drives in total are roughly flat.” The reference to ‘I/O oriented drives’ is to those products that achieve high random small-block performance, typically a 15K rpm drive that is short-stroked to mitigate seek latencies. The lower sales of 15K rpm drives are most likely attributable to 1) the large memory model capabilities of the successful Nehalem architecture, 2) the emergence of viable enterprise-class solid state storage, and 3) the 40% greater power dissipation penalty of 15K rpm HDDs over 10K rpm HDDs.
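
To make the ‘I/O oriented’ distinction concrete, here is a minimal sketch of the standard random-IOPS estimate for a disk drive, 1 / (average seek + average rotational latency); the seek times used are assumed ballpark figures, not WD or vendor specifications:

    # Why spindle speed and short-stroking matter for random small-block I/O.
    # Seek times below are assumed ballpark figures for illustration only.
    def random_iops(rpm: float, avg_seek_ms: float) -> float:
        """Estimate random IOPS as 1 / (avg seek + avg rotational latency)."""
        rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
        return 1000.0 / (avg_seek_ms + rotational_latency_ms)

    print(f"10K rpm, full stroke  : {random_iops(10_000, 4.2):.0f} IOPS")
    print(f"15K rpm, full stroke  : {random_iops(15_000, 3.4):.0f} IOPS")
    # Short-stroking confines I/O to the outer tracks, cutting the average seek:
    print(f"15K rpm, short-stroked: {random_iops(15_000, 1.5):.0f} IOPS")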

Cooler data is well served by the greener operational profile of 10K rpm and slower drives, while hot data is moving to solid state storage, which provides a better latency match for today’s n-way compute platforms.
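
As a purely illustrative sketch of that hot/cold placement idea (the access-rate thresholds and tier names here are hypothetical, not drawn from any product):

    # Toy tier-placement heuristic: hot data to SSD, warm data to 10K rpm SAS,
    # cool data to slower, greener nearline drives. Thresholds are hypothetical.
    def choose_tier(accesses_per_hour: float) -> str:
        if accesses_per_hour > 1000:      # latency-sensitive "hot" data
            return "SSD"
        if accesses_per_hour > 10:        # warm, still randomly accessed
            return "10K rpm SAS"
        return "7.2K rpm nearline"        # cool/cold, capacity- and power-optimized

    print(choose_tier(5000))   # -> SSD
    print(choose_tier(2))      # -> 7.2K rpm nearline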

Take-aways from a ‘Caml Trading talk at CMU’

Yaron Minsky of Jane Street Capital provided some good insight into the technology requirements of high-frequency trading in a post on his blog titled Caml Trading talk at CMU. The presentation at CMU, “Experience with Functional Programming on Wall Street”, was given in March of 2009. He covered the technology decisions as applied to Jane Street’s trading software. Go watch the embedded video. Some quick take-aways:

  • Jane Street is on one side or the other of roughly 1 to 2% of equity trades: 200 to 400 million shares per day, $4-8 billion in nominal value per day.
  • Small mistakes become quickly magnified and can be very expensive.
  • Business and technological requirements are moving at a blistering speed. Rate of change is opportunity for profit and also for significant loss if correctness is not ensured.
  • It is important to be fast but speed kills. Speed magnifies the impact of incorrect code.
  • The granularity of time used to be the phone; now it is the compute cycle. Specialists are gone, computers rule.
  • Jane Street has moved away from imperative programming languages to OCaml, a functional programming language.
  • Need to process hundreds of thousands of transactions and react on the order of a millisecond. (RSP – Latency and jitter can eliminate the trading advantage of a proprietary algorithm and introduce error into the high-frequency process.)
  • In parallel there is a need to build a data store for Business Intelligence processes that run continuously.
  • Process roughly a terabyte of data a day. Not earth shattering, as this is only ~12 MB/s averaged over a 24 hour day, or ~43 MB/s over a 6.5 hour trading session (see the sketch after this list).
  • Storing data is not a core competency. Value is in trading algorithms and the processing of data for intelligence. Cooler data is of lesser value. (RSP – Store off-premise?)
  • Hardware is cheap, key system level IT competencies are not.
  • A key challenge is to effectively process a TB of data per day. This consumes human resources that should be focused on developing new products.
  • The business limitation is the ability to execute at blistering speed. The organization is limited by access to smart, reasonable people with specific core competencies.
  • Technology requirements – Correctness, Agility and Performance
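
A quick sanity check on the bandwidth implied by a terabyte a day (decimal units assumed):

    # Roughly 1 TB/day of data, averaged over a full day versus a ~6.5 hour session.
    TB = 1e12                      # decimal terabyte, in bytes
    MB = 1e6

    daily_bytes = 1 * TB
    full_day_s = 24 * 3600
    session_s = 6.5 * 3600

    print(f"24-hour average : {daily_bytes / full_day_s / MB:5.1f} MB/s")   # ~11.6 MB/s
    print(f"6.5-hour session: {daily_bytes / session_s / MB:5.1f} MB/s")    # ~42.7 MB/s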

It is interesting that OCaml lacks real support for concurrency and message passing. This drives compute and hot-data latency requirements through the floor, i.e. very low. Network latency can be a dominating factor: trading algorithm development and testing can happen anywhere in the world, but the actual trade-execution compute resources must be physically co-located with those of other trading firms. Eek! Talk about a security nightmare…
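
To see why co-location matters, here is a minimal sketch of one-way propagation delay in fiber; the distances and the ~2/3 c fiber propagation speed are approximate assumptions for illustration:

    # Propagation delay alone can consume a millisecond reaction budget.
    SPEED_IN_FIBER_KM_S = 200_000    # roughly 2/3 of the speed of light

    def one_way_delay_ms(distance_km: float) -> float:
        return distance_km / SPEED_IN_FIBER_KM_S * 1000

    for place, km in [("same facility", 0.1),
                      ("across a metro area", 50),
                      ("New York to Chicago", 1_150),
                      ("New York to London", 5_600)]:
        print(f"{place:22s}: {one_way_delay_ms(km):6.3f} ms one-way")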

More on New Data Centers – Upfront Cost

Going forward I think we need to focus on the costs achieved by IBM and Yahoo!. While average costs are now in the range of $1,400 per sq. ft., there are some downward pressures on build-out cost.

– A Data Center Shortage for Silicon Valley? article in Data Center Knowledge notes that data center developers are focusing their limited capital on fewer data centers. The limited capital is a result of the economic downturn. There isn’t a data center shortage today; it remains to be seen if one will develop in the future.

– The economic correction has also exposed floor space over-capacity that was driven by a debt-based economy. How will this over-capacity be used in the future? As demonstrated by IBM, existing capacity can be converted into expanded data center capacity at a lower cost.

– It remains to be seen if the predictions of an economic recovery by 2H09/1H10 are realized. The economic community is expecting an L-shaped recovery, while some are more pessimistic, predicting a W or WW for many years.

In summary, expect data center costs to trend lower as over-capacity is leveraged and economic uncertainties put pressure on enterprises to cut both CapEx and OpEx spending. If the economic uncertainties continue, look for enterprises to shift the time value of money to OpEx as they struggle with the bottom line.

New Data Centers – Upfront Cost

There have been several new data center announcements over the last several months that provide some insight into the cost of ‘infrastructure’. This is the upfront CapEx required before on-premise or Cloud services can be offered. The following chart is based on public announcements of investment cost and the footprint of that investment.
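
For reference, the metric behind the chart is simply announced investment divided by announced footprint; a minimal sketch with placeholder figures (not the actual announcements):

    # Build-out cost per square foot from an announcement's disclosed
    # investment and footprint. Entries are placeholders for illustration.
    announcements = {
        "Example facility A": (100_000_000, 60_000),    # (investment $, sq. ft.)
        "Example facility B": (250_000_000, 180_000),
    }

    for name, (capex, sq_ft) in announcements.items():
        print(f"{name}: ${capex / sq_ft:,.0f} per sq. ft.")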

More on this later.

A perspective on McKinsey’s ‘Discussion Document’

The “discussion document” released by McKinsey and Company, “Clearing the air on cloud computing”, attempts to rein in some of the hype related to Cloud Computing but has also lowered expectations too far. By restricting the definition of a Cloud to hardware and focusing on hardware cost alone, several other value propositions of “Cloud” computing were ignored. As today’s hardware cost of computing approaches historical lows, the primary components of IT cost are software, deployment, support, performance and, increasingly, energy footprint.

First, let’s review the definition of Cloud. “Cloud” has the same diversity and abuse of definition that “Virtualization” once experienced. Under Amazon’s “cloud” we find Application resources, OS resources, Compute resources, Object Storage resources, Block Storage resources, Tunable Storage resources and, as of 11/19/08, Content Delivery resources. For Amazon the definition of “cloud” is tied to extreme virtualization of all these resources such that 3rd parties can tap into each to assemble, develop and deliver a solution. It does not mean that every resource under this definition of a cloud is the same. There are multiple resources that any enterprise, whether small, medium or large, can leverage.

It is not clear that restricting the definition of a cloud serves a productive purpose. A singular definition could be the basis for a standard that promotes interoperability, but in reality cloud providers would be less than supportive of a common “API” as they enhance their vertical product offerings. As we surf the Gartner hype curve, competition is driving down the cost of the Cloud while driving up available services, both horizontally and vertically.

To IT departments, the full definition of Cloud Computing offers more than just off-premise hardware. Clouds range from infrastructure to application platforms. The challenge is to develop a comprehensive strategic plan that reflects a continuum of solutions from on-premise to off-premise computing and maps those solutions onto the array of business applications. In reality the optimal solution lies somewhere on this solution continuum.

But why explore the alternatives if Cloud costs are too high? Cloud computing is a disruptive vector whose advantages can be greatly magnified by economic crisis. As noted by some economists, world GDP may have declined to lower levels and recovery is several years out. Texas Instruments’ most recent quarterly report noted during the Q&A that “our customers are lining up their orders to end demand – at an overall level lower than it has been for quite sometime.” Inventory corrections are nearing an end, but they do not expect volume and growth to return to pre-crisis levels. Enterprise costs must be brought in line with new, lower revenue levels. IT departments will feel the pressure to reduce cost while supporting growth initiatives. Cloud, in its full definition, offers a viable alternative to traditional strategies.

On a pure hardware basis a highly utilized on-premise infrastructure will be cost competitive. As noted in the McKinsey analysis, on-premise hardware cost may be 40% of Amazon’s EC2. But as already noted, IT costs are more than just hardware and are a function of many metrics. A Merrill Lynch research note highlighted that OnDemand applications like Salesforce.com can be provisioned for as little as $300-500 per subscriber after fully costing hardware, software and service, vs. as much as $8,000-10,000/user for On-Premise client-server applications. Even then this cost delta fails to include the lost opportunity of forgone revenue while an on-premise data center is provisioned.
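
To put the Merrill Lynch per-user numbers in perspective, here is a minimal sketch of the cost delta; the 1,000-user deployment size is a hypothetical illustration:

    # Per-user cost ranges cited above: $300-500/subscriber OnDemand vs.
    # $8,000-10,000/user on-premise client-server.
    ondemand_per_user = (300, 500)
    onpremise_per_user = (8_000, 10_000)

    users = 1_000   # hypothetical mid-size deployment

    low = users * (onpremise_per_user[0] - ondemand_per_user[1])
    high = users * (onpremise_per_user[1] - ondemand_per_user[0])
    print(f"Potential savings for {users} users: ${low:,} to ${high:,}")
    # -> Potential savings for 1000 users: $7,500,000 to $9,700,000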

Is the on-premise / off-premise answer black and white? No. The best solution for medium and large businesses most likely begins with on-premise computing and grows towards a hybrid multi-premise solution. This adds significant complexity to a decision process that is already a moving target. Predictive technologies can be applied to drive a competitive IT strategic roadmap and enable continuous optimization as Cloud Computing matures. Better to have a clear understanding of the landscape than to be lost in the Clouds.

Intro

Welcome to RS Performance! RS stands for Right Size, or in this case the best solution for your IT capacity and performance needs. As the IT landscape experiences the combined disruptive forces of Web x.0 and the economic cycle of 2008/9/?, there is an opportunity to right-size your CapEx and OpEx investments.

The challenge: what do you do? We at RS Performance hope to help you answer that question.

Stay tuned . . . .