Musings on personal and enterprise technology (of potential interest to professional technoids and others)

Wednesday, September 30, 2009

IBM/Lotus Enterprise Microblogging Debuts: What about Microsoft?

Is IBM onto something here, ahead of Microsoft? Indeed, Lotus-based organizations should see benefit from the secure enterprise microblogging described below. It seems organizations based on Exchange would similarly benefit if the upcoming Exchange 2010 release included these "publish/subscribe dialogues" within the enterprise, similar to Twitter (though there is no indication of that in the Exchange 2010 Beta so far). Exchange's unified messaging (available since Exchange 2007) may be useful and have significant benefits, but IMHO it is not as simple and easy to use as a Twitter-like service could/should be, if integrated intelligently into the enterprise architecture. Here's further information, courtesy of IBM Enterprise Microblogging Debuts - Technology For Change:

"Lotus Connections now boasts a Twitter-like service that lets employees establish and maintain publish/subscribe dialogues within their organizations.

At its Center for Social Software symposium being held this week in Cambridge, Mass., IBM announced the release of a microblogging and file-sharing facility bolstering the Lotus Connections suite of enterprise social networking tools. The Twitter-like service enables employees to establish and maintain publish/subscribe dialogues within their organizations.
Additional features added to Lotus Connections include support for iPhone and Nokia S60 mobile devices, including microbrowser access to Profiles, Activities and the Lotus Connections blogging tool..."
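The "publish/subscribe dialogues" described above follow the classic publish/subscribe pattern. Here is a minimal sketch of that pattern in Python — all names are illustrative, and this is not the Lotus Connections API:

```python
from collections import defaultdict

class MicroblogHub:
    """Toy publish/subscribe hub: subscribers follow an author's stream,
    and each post fans out to every follower's inbox."""

    def __init__(self):
        self.followers = defaultdict(set)   # author -> set of subscribers
        self.inboxes = defaultdict(list)    # subscriber -> received posts

    def follow(self, subscriber, author):
        self.followers[author].add(subscriber)

    def post(self, author, message):
        # Fan the post out to everyone following this author.
        for subscriber in self.followers[author]:
            self.inboxes[subscriber].append((author, message))

hub = MicroblogHub()
hub.follow("bob", "alice")
hub.post("alice", "Symposium starts at 9am")
print(hub.inboxes["bob"])  # [('alice', 'Symposium starts at 9am')]
```

The enterprise value-add over public Twitter is simply that both the hub and the inboxes stay inside the organization's security boundary.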

Friday, September 25, 2009

Measurement + Monitoring of Data Centre Energy Use Immature Through 2011: Gartner

As previously posted, "cheap" servers really aren't cheap at all (InfoWorld, 5/2008), since recurring server operational costs are often as large as, or larger than, the initial hardware capital investment. Below is a related update from Gartner, reminding us that "you can't manage what you can't measure". Here is Gartner's explanation of the importance of metrics for reducing data-centre energy consumption [emphasis mine]:
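To make the "cheap servers aren't cheap" point concrete, here is a back-of-the-envelope comparison. All figures are illustrative assumptions for the sketch, not data from the InfoWorld piece:

```python
# Illustrative assumptions only -- not measured data.
server_price = 3000.0        # capital cost, USD
power_draw_kw = 0.5          # average server draw, kW
pue = 2.0                    # facility overhead factor (cooling, power distribution)
electricity_rate = 0.12      # USD per kWh
hours_per_year = 24 * 365
years_in_service = 4

# Lifetime energy cost = draw * overhead * hours * rate * years
energy_cost = power_draw_kw * pue * hours_per_year * electricity_rate * years_in_service
print(f"capex: ${server_price:,.0f}, 4-yr energy opex: ${energy_cost:,.0f}")
# capex: $3,000, 4-yr energy opex: $4,205
```

Even with these modest assumed numbers, four years of energy alone exceeds the purchase price — before counting administration, floor space, or maintenance.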

Gartner Says Measurement and Monitoring of Data Centre Energy Use Will Remain Immature Through 2011:
“...when asked which energy management metrics they will use in the next 18 months, 48 per cent of respondents have not even considered the issue of metrics. However, without metrics it is impossible to get accurate data, which is essential to evaluating basic costs, proportioning these costs to different users and setting policies for improvement.

'These metrics form the bedrock for internal cost and efficiency programmes and will become increasingly important for external use', said Mr Kumar. 'Organisations that want to publicise their carbon usage through green accounting principles will need to have their basic energy use continuously monitored.'

Mr Kumar also urged organisations not to rely on internal metrics saying that evaluating server energy needs to be done in an open and transparent manner...”


Wednesday, September 16, 2009

Mashable: Carrier Pigeons beat ADSL, but Memory Card capacity growing even faster

'Winston' the racing pigeon after flying with a 4GB SD card in Durban, South Africa. (EPA/STR/Corbis, as per THE WEEK)

Mashable: "...Internet lines getting faster, but memory card capacity is getting bigger even faster. If you need your data to travel fast, use a carrier pigeon."

Of course, this is not the only situation where "low tech" wins over conventional methods... e.g. in some situations and for some audiences, email's reliability (or lack thereof) cannot match the simplicity and immediate hardcopy accessibility of a conventional fax transmission. In any case, IMHO this is at least worth a smile :)
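The pigeon's effective "bandwidth" is simply payload divided by travel time. Using the reported 4GB card and an assumed two-hour trip (an illustrative figure, not the actual race result):

```python
payload_gb = 4.0             # SD card capacity from the story
flight_seconds = 2 * 3600    # assumed ~2-hour trip, illustrative only

# Decimal GB -> megabits, then divide by transit time.
payload_megabits = payload_gb * 8 * 1000
throughput_mbps = payload_megabits / flight_seconds
print(f"effective throughput: {throughput_mbps:.1f} Mbit/s")  # 4.4 Mbit/s
```

Several Mbit/s sustained comfortably beats a typical 2009-era ADSL uplink — and the "throughput" scales linearly with card capacity, which is exactly the point of the Mashable piece.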


Tuesday, September 1, 2009

Gmail outage resolved thanks to flexible technical architecture [Official Gmail Blog]

Google's Site Reliability Czar offers a clear explanation of today's Gmail outage, quoted below.

IMHO, as painful and widespread as the outage was (a side-effect of "routine upgrades"), urgent and ultimately successful action was taken towards resolution. And of course, the true "secret sauce" that enabled this resolution is the underlying foundation of "flexible capacity", which "is one of the advantages of Google's architecture".

Here is the full Official Gmail Blog post:

Official Gmail Blog: More on today's Gmail issue: Tuesday, September 01, 2009 6:59 PM
Posted by Ben Treynor, VP Engineering and Site Reliability Czar

"Gmail's web interface had a widespread outage earlier today, lasting about 100 minutes. We know how many people rely on Gmail for personal and professional communications, and we take it very seriously when there's a problem with the service. Thus, right up front, I'd like to apologize to all of you — today's outage was a Big Deal, and we're treating it as such. We've already thoroughly investigated what happened, and we're currently compiling a list of things we intend to fix or improve as a result of the investigation.

Here's what happened: This morning (Pacific Time) we took a small fraction of Gmail's servers offline to perform routine upgrades. This isn't in itself a problem — we do this all the time, and Gmail's web interface runs in many locations and just sends traffic to other locations when one is offline.

However, as we now know, we had slightly underestimated the load which some recent changes (ironically, some designed to improve service availability) placed on the request routers — servers which direct web queries to the appropriate Gmail server for response. At about 12:30 pm Pacific a few of the request routers became overloaded and in effect told the rest of the system 'stop sending us traffic, we're too slow!'. This transferred the load onto the remaining request routers, causing a few more of them to also become overloaded, and within minutes nearly all of the request routers were overloaded. As a result, people couldn't access Gmail via the web interface because their requests couldn't be routed to a Gmail server. IMAP/POP access and mail processing continued to work normally because these requests don't use the same routers.

The Gmail engineering team was alerted to the failures within seconds (we take monitoring very seriously). After establishing that the core problem was insufficient available capacity, the team brought a LOT of additional request routers online (flexible capacity is one of the advantages of Google's architecture), distributed the traffic across the request routers, and the Gmail web interface came back online.

What's next: We've turned our full attention to helping ensure this kind of event doesn't happen again. Some of the actions are straightforward and are already done — for example, increasing request router capacity well beyond peak demand to provide headroom. Some of the actions are more subtle — for example, we have concluded that request routers don't have sufficient failure isolation (i.e. if there's a problem in one datacenter, it shouldn't affect servers in another datacenter) and do not degrade gracefully (e.g. if many request routers are overloaded simultaneously, they all should just get slower instead of refusing to accept traffic and shifting their load). We'll be hard at work over the next few weeks implementing these and other Gmail reliability improvements — Gmail remains more than 99.9% available to all users, and we're committed to keeping events like today's notable for their rarity."
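The failure mode Treynor describes — overloaded routers refusing traffic and shifting their load onto the rest — can be sketched as a toy simulation. The capacities and load figures below are invented for illustration and have nothing to do with Gmail's real numbers:

```python
def simulate_cascade(router_capacity, total_load, routers_online):
    """Toy model of cascading overload: load spreads evenly over healthy
    routers; any router pushed past capacity refuses traffic, shifting
    its share onto the remaining routers."""
    healthy = routers_online
    while healthy > 0:
        per_router = total_load / healthy
        if per_router <= router_capacity:
            return healthy   # system stabilizes with this many routers
        healthy -= 1         # one more router overloads and drops out
    return 0                 # total outage: no router accepts traffic

# 10 routers, each handling up to 100 load units. At 150 total load
# the system is fine, with ample headroom:
print(simulate_cascade(100, 150, 10))   # 10

# But once per-router load creeps past capacity, the refusal behavior
# cascades: each dropout raises the load on survivors until none remain.
print(simulate_cascade(100, 1050, 10))  # 0
```

This is why the post singles out graceful degradation: if an overloaded router merely slowed down instead of refusing traffic outright, the dropout step in the loop above would never fire, and the system would degrade rather than collapse.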