Sometimes, you know you didn’t pay enough

… (and sometimes you probably suspect you paid too much)

A common trait in western cultures is the eye for a good deal – you know, getting two-for-the-price-of-one, or thinking that it’s worth buying something because it’s on sale and you’ll save 25%, rather than because you really need it or wanted it beforehand.

I saw a quotation the other day which set me thinking… John Ruskin, a leading 19th-century English artist, all-round intellectual and writer on culture & politics, said:

“There is hardly anything in the world that someone cannot make a little worse and sell a little cheaper, and the people who consider price alone are that person’s lawful prey. 

It is unwise to pay too much, but it is also unwise to pay too little. 

When you pay too much, you lose a little money, that is all. When you pay too little, you sometimes lose everything because the thing you bought is incapable of doing the thing you bought it to do. 

The common law of business balance prohibits paying a little and getting a lot… It can’t be done.

If you deal with the lowest bidder it is well to add something for the risk you run. 

And if you do that you will have enough to pay for something better.” — John Ruskin (1819-1900)

This is something that executives at Mattel toys are maybe mulling over right now, but it’s probably a valuable lesson for any consumer about the risk of always going for the absolute cheapest option, whatever it is you’re buying.

There’s probably an economic principle to explain all this, but I’ve no idea what it’s called

As it happens, I’ve been getting back into cycling recently and that’s required me to spend a great deal of time and money poring over bikes & accessories, whilst learning about all the differences between manufacturers, model ranges etc.

In short, they’re all much of a muchness. Just like computers, consumer electronics, or cars – is last year’s model really so inferior to the all-shiny new one that it’s worth paying the premium for the up-to-date one? And how can a single manufacturer make such a huge range of related products and still retain its aspirational brand values? (quality, excellence, durability, performance, blah blah blah)

I’ve pretty much come to the conclusion that for any individual at any point in time, there is a point where whatever it is you’re looking at is just too cheap, too low-spec for your needs. Sure, I can buy a mountain bike for £50 in supermarkets or junk shops, but it’ll be heavy and not as well screwed together as a more expensive one I might get from a good cycle shop.

There’s a similar principle in all sorts of consumer areas – wine, as another example. It’s possible to buy wine at £3 a bottle, but it’s going to be pretty ropey. From £5 up, you start getting really noticeable improvements – maybe a £6 bottle could be considered five times better than a £3 one – though it’s unlikely that this will carry on: at some point, you’ll pay double and the more expensive product will hardly be any better to most people. For someone, though, that might be the mid-point in their curve, which stretches from too cheap at one end to too expensive at the other, with a nice flat bit in the middle where they really want to be.

The far end of that curve would be the point where buying something too expensive will be wasted – if I only need the mountain bike to go to the shops on a Sunday morning for the newspapers, I could do without a lot of the lightweight materials or fancy suspension that a better bike would have. Ditto, if I’m an average cyclist, I won’t need a top-of-the-range carbon bike since it won’t make any difference to my “performance” (though try saying that to all the golfers who regularly sink their salaries into buying all the latest kit, without having any meaningful impact on their game).

Maybe it won’t be “wasted”, but I just won’t have any way of judging it against the other products near it – if I’m in the market for a MINI and yet looked at the comparative price difference between a Ferrari and an Aston Martin, I wouldn’t rationally be able to say that one is better and worth the premium over the other.

So what does any of this have to do with software?

A two-fold principle I suppose: on one hand, maybe you don’t need to buy the latest and greatest piece of software without knowing what it will do for you and why. Or if you do buy the new version, have you really invested any effort into making sure you’re using it to its maximum potential?

Look at the new version of Microsoft Office, with the much-discussed “Ribbon” UI (actually, this link is a great training resource – it shows you the Office 2003 interface; you click on an icon or menu item, and it takes you to the location of the same command in the new UI).

The Ribbon scares some people when they see it, as they just think “all my users will need to be re-trained”, and they maybe ask “how can I make it look like the old version?”.

The fact that the Ribbon is so different gives us an excellent opportunity to think about what the users are doing in the first instance – rather than taking old practices and simply transplanting them into the new application, maybe it’s time to look in more depth at what the new application can do, and see if the old ways are still appropriate?

A second point would be to be careful about buying software which is too cheap – if someone can give it away for free, or it’s radically less expensive than the rest of the software in that category, are you sure it’s robust enough, and that it will have a good level of support behind it (not just now, but in a few years’ time)? What else is the supplier going to get out of you, if they’re subsidising that low-cost software?

Coming back to Ruskin: it’s quite ironic that a quick search for that quote online reveals lots of businesses who’ve chosen it as a motto on their web sites. Given that Ruskin was an opponent of capitalism (in fact he gave away all the money he inherited on his father’s death), I wonder how he would feel about so many companies using his words to explain why they aren’t cheaper than their competitors.

Keep the Item count in your mailbox low!

I’ve been doing a little digging today, following a query from a partner company who’re helping out one of their customers with some performance problems on Exchange. Said customer is running Exchange 2000, and has some frankly amazing statistics…

… 1000 or so mailboxes, some of which run to over 20Gb in size, with an average size of nearly 3Gb. To make matters even worse, some users have very large numbers of items in their mailbox folders – 60,000 or more. Oh, and all the users are running Outlook in Online mode (ie not cached).

Now, seasoned Exchange professionals the world over will either shrug and say that this kind of horror story is second nature to them, or faint at the thought of this one – but it’s not really obvious to the average IT admin *why* this kind of story is bad news.

When I used to work for the Exchange product group (back when I could say I was still moderately technical), I posted on the Exchange Team blog (How does your Exchange garden grow?) with some scary stories about how people unknowingly abused their Exchange systems (like the CEO of a company who had a nice clean inbox with 37 items, totalling just over 100kb in size… but a Deleted Items folder that was 7.4Gb in size with nearly 150,000 items).

Just like it’s easy to get sucked into looking at disk size/capacity when planning big Exchange deployments (in reality, it’s IO performance that counts more than storage space), it’s easy to blame big mailboxes for bad performance when in fact, it could be too many items that cause the trouble.

So what’s too many?

Nicole Allen posted a while back on the EHLO! blog, recommending a maximum of 2,500-5,000 items in the “critical path” folders (Calendar, Contacts, Inbox, Sent Items), and ideally keeping the Inbox to fewer than 1,000 items. Some more detail on the reasoning behind this comes from the Optimizing Storage for Exchange 2003 whitepaper…

Number of Items in a Folder

As the number of items in the core Exchange 2003 folders increase, the physical disk cost to perform some tasks will also increase for users of Outlook in online mode. Indexes and searches are performed on the client when using Outlook in cached mode. Sorting your Inbox by size for the first time requires the creation of a new index, which will require many disk I/Os. Future sorts of the Inbox by size will be very inexpensive. There is a static number of indexes that you can have, so folks that often sort their folders in many different ways could exceed this limit and cause additional disk I/O.

One potentially important point here is that any folder is going to take longer to process as it fills up with items. Sorting or any other view-related activity will take longer, and even retrieving items from the folder will slow down (hammering the server at the same time).

Oh, and be careful with archiving systems which leave stubs behind too – you might have reduced the mailbox size, but performance could still be negatively affected if the folders have lots of items left.
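
If you’re curious how your own mailbox stacks up against those numbers, here’s a rough sketch (not a supported tool) in Python that walks the folder tree through Outlook’s COM object model and flags anything over the 5,000-item mark. It assumes a Windows PC with Outlook installed, a profile connected to the mailbox in question, and the pywin32 package; the threshold is just the upper end of the recommendation above.

```python
# Rough sketch: list folders in the default Outlook mailbox and flag any with
# a worryingly high item count. Requires Windows, Outlook and pywin32.
import win32com.client

ITEM_WARNING_THRESHOLD = 5000  # upper end of the EHLO recommendation above

def walk_folders(folder, depth=0):
    """Recursively print each folder's item count, flagging the big ones."""
    count = folder.Items.Count
    flag = "  <-- consider pruning/archiving" if count > ITEM_WARNING_THRESHOLD else ""
    print(f"{'  ' * depth}{folder.Name}: {count} items{flag}")
    for sub in folder.Folders:
        walk_folders(sub, depth + 1)

if __name__ == "__main__":
    outlook = win32com.client.Dispatch("Outlook.Application")
    namespace = outlook.GetNamespace("MAPI")
    root = namespace.Folders.Item(1)  # first store in the profile
    walk_folders(root)
```

Run against a mailbox like the one described above, the problem folders jump out pretty quickly.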

Laptop melts, for once it wasn’t the battery

Here’s a funny one – it happened a while back, but I was sent a link to this story today. The author kept her laptop in the oven when she wasn’t at home, since she lived in a high-crime area and it seemed a non-obvious place for a laptop to live…

Then one day she came home and her partner was cooking french fries… and presumably hadn’t looked in the oven before switching it on 🙂

I suppose it makes a change that the laptop was melted by external factors, rather than the battery causing some internal pyrotechnics.

Even more amazing: the thing booted up and worked just fine!

OCS2007 trial edition now available

If you want to get your hands on trial software for the recently-released Office Communications Server 2007 and its client, Office Communicator 2007, then you’re in luck…

Bear in mind that these trials are for tyre-kicking and lab testing only – don’t put them into full-blown production. They will also expire in 180 days, though they can be upgraded to the released and fully supported code.

Plain text, RTF or HTML mail?

Here’s an interesting question that I was asked earlier today; I can’t offer a definitive answer, but these are my thoughts. If you have any contradictory or complementary comments, please comment or let me know.

“Can RTF/HTML Mail be as safe as plain text with regard to viruses/malware etc?”

Theoretically, I think plain text will always be safer since there’s less work for the server to do, and there’s no encoding of the content other than the real basics of wrapping up the envelope of the message (eg taking the various to/from/subject fields, encapsulating the blurb of body text, and turning it into an SMTP-formatted message).

Where things could get interesting is that plain text still allows for encoding of attachments (using, say, MIME or UUENCODE), which could still be infected or badly formed – so the risk level of attachments is technically the same (although in an RTF or HTML mail, the attachment can be inline with the text, which might mean the user is more likely to be lured into opening it, if it’s malicious).
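
To make that concrete, here’s a small sketch using Python’s standard email library – the addresses and the attachment file are made up – showing that a message with a plain text body still carries its attachment as an encoded MIME part, just as an HTML message would:

```python
# Even a "plain text" message wraps any attachment as a base64-encoded MIME
# part, so the attachment risk is broadly the same as for HTML mail.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Plain text body, binary attachment"
msg.set_content("The body itself is just simple text.")

payload = b"%PDF-1.4 ..."  # stand-in for a real (or malicious) file
msg.add_attachment(payload, maintype="application", subtype="pdf",
                   filename="report.pdf")

# Note the Content-Transfer-Encoding: base64 header on the attachment part,
# regardless of whether the body was plain text, RTF or HTML.
print(msg.as_string())
```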

There may be some risks from a server perspective in handling HTML mail, which mean that a badly formed message might be used to stage a denial of service on the server itself. I heard tell of a case a few years ago when a national newsletter was sent out with a badly formed HTML section, and when the Exchange server processed the incoming mail, the store process crashed (bringing Exchange to its knees in an instant).

The downsides with that scenario were:

  • The message was still in the inbound queue, so when the store came back online, it started processing the message again and <boom>
  • This newsletter was sent to thousands of people, meaning that any company with at least one person receiving that mail suffered an instant server outage until they identified the offending message and fished it out of the queue.

This bug in Exchange was identified & fixed, but there’s always the theoretical possibility that since the formatting of an HTML message is more complex, there could be glitches in handling the message (in any email system).

Plain text mail is ugly and so lowest-common-denominator that it’d be like telling everyone to save their documents as .TXT rather than .DOC or .PDF.

RTF mail works OK internally, but doesn’t always traverse gateways between Exchange systems, and isn’t supported by anything other than Outlook (ie if you mail a user on Domino, they won’t see the rich text).

HTML mail may be slightly larger (ie producing the same content as you would in RTF takes more encoding, so the message is sometimes a bit bigger as a result), but it’s much more compatible with other clients & servers, offers much better control of layout, and traverses other email systems more smoothly.

I’d say HTML mail is the obvious way to go. Anyone disagree?

OCS 2007 RTMs

The title says it all really – Office Communications Server 2007 released to manufacturing on Friday. Mark posted about it then, so I guess it’s official (though there’s not much hoo-haa yet on Microsoft.com).

It’s getting pretty exciting with the use of desktop video (even though it’s nothing new: we’ve had it in one shape or another since NetMeeting in the Windows 95 days) starting to take off. Gartner Group’s “Hype Cycle” for Comms & Collaboration from last year put Desktop Video firmly on the way into the “Trough of Disillusionment”. I wonder if pervasive camera deployments, and software-enabled VoIP through OCS, will lift Desktop Video back onto the Slope of Enlightenment?

Living the dream with Office Communicator 2007

I’ve been a long-time fan of instant messaging and pervasive “presence”, especially the cultural changes it allows organisations to make in order to communicate and collaborate better. As a result, I’ve been really interested to see what’s been happening with Office Communications Server (the soon-to-be-released successor to Live Communications Server).

Around 6 weeks ago, I joined an internal MS deployment of full-voice OCS, meaning that my phone number was moved onto the OCS platform so now I’m not using the PBX at all. It’s been a remarkably cool experience in a whole lot of ways, but it really hits home just how different the true UC world might be, when you start to use it in anger.

I’ve been working from home today, and since my laptop is on the internet (regardless of whether I’m VPNed into the company network), the OCS server will route calls to my PC and simultaneously to my mobile, so I can pick them up wherever I am. As more and more people are using OCS internally, it’s increasingly the norm to just hit the “Call” button from within Office Communicator (the OCS client) or from Outlook, and not really care which number is going to be called.

brettjo on a Catalina

Here, I was having a chat with Brett and since we both have video cameras, I just made a video call – I was at home so just talked to the laptop in speakerphone-type mode; Brett was in the office, so he used his wired phone, which was plugged into the PC:

(this device is known internally as a “Catalina” and functions mainly as a USB speaker/microphone, but also has some additional capabilities like a message waiting light, a few hard-buttons, and a status light that shows the presence as currently set on OCS).

It’s a bit weird when you start using the phone and realise that you’re not actually going near a traditional PBX environment for a lot of the interaction. Calling up voice mail, as delivered by Exchange Unified Messaging, is as easy as pressing the “call voice mail” button in Communicator – no need to provide a PIN or an extension number, since the system already knows who I am and I’ve already authenticated by logging in to the PC.

When I use this, the “call” goes from my PC to OCS, then from the OCS server directly to the Exchange server, all as an IP data stream and without touching the traditional TDM PBX that we still have here. A third party voice gateway allows me to use OCS to call other internal people who are still homed on the PBX system, and to make outbound calls.

Microsoft’s voice strategy of “VoIP As You Are” starts to make a lot of sense in this environment – I could deploy technology like OCS and Exchange UM and start getting immediate benefit, without needing to rip & replace the traditional phone system, at least not until it’s ready for obsolescence.

Here’s an idea of what kind of system is in place – for more information, check out Paul Duffy’s interview with ZDNet’s David Berlind.

The business case for Exchange 2007 – part II

(This is a follow-on to the previous post on measuring business impact and to the first post on the business case for Exchange 2007, and these are my own thoughts on the case for moving to Exchange 2007. It’s part of a series of posts which I’m trying to keep succinct, though they tend to be a bit longer than usual. If you find them useful, please let me know…)

GOAL: Reduce the backup burden

Now I’m going to start by putting on the misty rose-tinted specs and thinking back to the good old days of Exchange 4.0/5.x. When server memory was measured in megabytes and hard disk capacity in the low Gbs, performance bottlenecks were hit much sooner than they are today.

Lots of people deployed Exchange servers with their own idea of how many users they would “fit” onto each box – in some cases, it would be the whole organisation; in others, it would be as many users as that physical site had (since good practice then was to deploy a server at every major location); sometimes it would be determined by how many mailboxes the server could handle before it ran out of puff. As wide area networks got faster, more reliable and less expensive, and as server hardware got better and cheaper, the bottleneck for lots of organisations stopped being how many users the server could handle, and became how many users IT was comfortable having the server handle.

On closer inspection, this “comfort” level would typically come about for 2 reasons:

  • Spread the active workload – If the server goes down (either planned or unplanned), I only want it to affect a percentage of the users rather than everyone. This way, I’d maybe have 2 medium-sized servers and put 250 users on each, rather than 500 users on one big server.
  • Time to Recovery is lower – If I had to recover the server because of a disaster, I only have so many hours (as the SLA might state) to get everything back up and running, and it will take too long to restore that much data from tape. If I split the users across multiple servers, then the likelihood of a disaster affecting more than one server may be lower, and,  in the event of total site failure, the recovery of multiple servers can at least be done in parallel.

(Of course, there were other reasons, initially – maybe people didn’t believe the servers would handle the load, so played safe and deployed more than they really needed… or third party software, like Blackberry Enterprise Server, might have added extra load so they’d need to split the population across more servers).

So the ultimate bottleneck is the time it takes for a single database or single server’s data to be brought back online in the event of total failure. This time will be a function of how fast the backup medium is (older DAT-type tape backup systems might struggle to do 10Gb/hr, whereas a straight-to-disk backup might do 10 or 20 times that rate), and is often referred to in mumbo-jumbo whitepaper speak as “RTO” or Recovery Time Objective. If you’ve only got 6 hours before you need to have the data back online, and you can recover 20Gb/hr from your backup media, then at a maximum you could only afford to have 120Gb to recover and still have a hope of meeting the SLA.
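
In sketch form, that back-of-the-envelope sum looks like this (the figures are the illustrative ones above, not measurements from a real system):

```python
# Recovery-time arithmetic: how much data can sit on one database/server and
# still be restored within the SLA? Figures are illustrative only.
rto_hours = 6                 # SLA: data must be back online within 6 hours
restore_gb_per_hour = 20      # restore throughput of the backup medium

max_restorable_gb = rto_hours * restore_gb_per_hour
print(f"Largest data set restorable within the SLA: {max_restorable_gb} GB")
# -> 120 GB: anything bigger on a single database/server puts the SLA at risk
```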

There are a few things that can be done to mitigate this requirement:

  • Agree a more forgiving RTO.
  • Accept a lower RPO (Recovery Point Objective is, in essence, the stage you need to get to – eg have all the data back up and running, or possibly have service restored but with no historical data, such as with dial-tone recovery in Exchange).
  • Reduce the volume of data which will need to be recovered in series – by separating out into multiple databases per server, or by having multiple servers.

Set realistic expectations

Now, it might sound like a non-starter to say that the RTO should be longer, or the RPO less functional – after all, the whole point of backup & disaster recovery is to carry on running even when bad stuff happens, right?

It’s important to think about why data is being backed up in the first place: it’s a similar argument to using clustering for high availability. You need to really know whether you’re looking for availability, or recoverability. Availability means that you can keep a higher level of service, by continuing to provide service to users even when a physical server or other piece of infrastructure is no longer available, for whatever reason. Recoverability, on the other hand, is the ease and speed with which service and/or data can be brought back online following a more severe failure.

I’ve spoken with lots of customers over the years who think they want clustering, but in reality they don’t know how to operate a single server in a well-managed and controlled fashion, so adding clusters would make things less reliable, not more. I’ve also spoken with customers who think they need site resilience, so if they lose their entire datacenter, they can carry on running from a backup site.

Since all but the largest organisations tend to run their datacenters in the same place where their users are (whether that “datacenter” is a cupboard under the stairs or the whole basement of their head office), in the event that the entire datacenter is wiped out, it’s quite likely that they’ll have lots of other things to worry about – like where are the users going to sit? How is the helpdesk going to function, and communicate effectively with all those now-stranded users? What about all the other, really mission-critical applications? Is email really as important as the sales order processing system, or the customer-facing call centre?

In many cases, I think it is acceptable to have a recovery objective of delivering, within a reasonable time, a service that will enable users to find each other and to send & receive mail. I don’t believe it’s always worth the effort and expense that would be required to bring all the users’ email online at the same time – I’d rather see mail service restored within an hour, even if it takes 5 days for the historical data to come back, than wait 8 hours to restore a service which includes all the old data.

How much data to fit on each server in the first place

Microsoft’s best practice advice has been to limit the size of each Exchange database to 50Gb (in Exchange 2003), to make the backup & recovery process more manageable. If you built Exchange 2003 servers with the maximum number of databases, this would set the size “limit” of each server to 1Tb of data. In Exchange 2007, this advisory “limit” has been raised to 100Gb maximum per database, unless the server is replicating the data elsewhere (using the Continuous Replication technology), in which case it’s 200Gb per database. Oh, and Exchange 2007 raises the total number of databases to 50, so in theory, each server could now support 10Tb of data and still be recoverable within a reasonable time.

The total amount of data that can be accommodated on a single server is often used to make a decision about how many mailboxes to host there, and how big they should be – it’s pretty common to see sizes limited to 200Mb or thereabouts, though it does vary hugely (see the post on the Exchange Team blog from a couple of years ago to get a flavour). Exchange 2007 now defaults to having a mailbox quota of 10 times that size: 2Gb, made possible through some fundamental changes to the way Exchange handles and stores data.
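
As a rough worked example of how those advisory limits translate into per-server capacity – a sketch only, since real sizing would allow for deleted-item retention, database white space and so on:

```python
# Per-server capacity arithmetic using the advisory limits quoted above.
e2003_databases, e2003_db_limit_gb = 20, 50    # implied by the 1 TB figure (20 x 50 GB)
e2007_databases, e2007_db_limit_gb = 50, 200   # with continuous replication

e2003_ceiling_gb = e2003_databases * e2003_db_limit_gb   # ~1 TB
e2007_ceiling_gb = e2007_databases * e2007_db_limit_gb   # ~10 TB

quota_gb = 2                                   # the Exchange 2007 default quota discussed above
mailboxes_at_quota = e2007_ceiling_gb // quota_gb

print(f"Exchange 2003 ceiling: ~{e2003_ceiling_gb} GB per server")
print(f"Exchange 2007 ceiling: ~{e2007_ceiling_gb} GB per server")
print(f"...roughly {mailboxes_at_quota} mailboxes at a {quota_gb} GB quota (ignoring overhead)")
```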

Much of this storage efficiency derives from Exchange 2007 running on 64-bit (x64) servers, meaning there’s potentially a lot more memory available for the server to cache disk contents in. A busy Exchange 2003 server (with, say, 4000 users) might only have enough memory to cache 250Kb of data for each user – probably not even enough for caching the index of the user’s mailbox, let alone any of the data. In Exchange 2007, the standard recommendation would be to size the server so as to have 5Mb or even 10Mb of memory for every user, resulting in dramatically more efficient use of the storage subsystem. The pay-off is that the storage subsystem’s I/O throughput – traditionally the performance bottleneck on Exchange – is considerably less of a constraint.
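
To put those cache numbers side by side (again, just the illustrative figures from the paragraph above):

```python
# Database cache available per user: Exchange 2003 vs Exchange 2007 sizing.
users = 4000

e2003_cache_kb_per_user = 250          # what a busy 32-bit server might manage
e2007_cache_mb_per_user = (5, 10)      # the Exchange 2007 sizing guidance quoted above

e2003_total_gb = users * e2003_cache_kb_per_user / (1024 * 1024)
e2007_total_gb = [users * mb / 1024 for mb in e2007_cache_mb_per_user]

print(f"Exchange 2003: ~{e2003_total_gb:.1f} GB of cache across {users} users")
print(f"Exchange 2007: ~{e2007_total_gb[0]:.0f}-{e2007_total_gb[1]:.0f} GB for the same population")
# More reads served from memory means far fewer disk I/Os per user.
```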

NET: Improvements in the underlying storage technology within Exchange 2007 mean that it is feasible to store a lot more data on each server, without performance suffering and without falling foul of your RTO/SLA goals.

I’ve posted before about Sizing Exchange 2007 environments.

What to back up and how?

When looking at backup and recovery strategies, it’s important to consider exactly what is being backed up, how often, and why.

Arguably, if you have a 2nd or 3rd online (or near-online) copy of a piece of data, then it’s less important to back it up in a more traditional fashion, since the primary point of recovery will be another of the online copies. The payoff for this approach is that it no longer matters as much if it takes a whole weekend to complete writing the backup to whatever medium you’re using (assuming some optical or magnetic media is still in play, of course), and that slower backup is likely to be used only for long-term archival or for recovery in a true catastrophe when all replicas of the data are gone.

Many organisations have sought to reduce the volume of data on Exchange for the purposes of meeting their SLAs, or because keeping large volumes of data on Exchange was traditionally more expensive due to the requirements for high-speed (and often shared) storage. With more memory available to a 64-bit Exchange server, the hit on I/O performance can be much lower, meaning that a 2007 server could host more data on the same set of disks than an equivalent 2003 server would (working on the assumption that Exchange has historically hit disk I/O throughput bottlenecks before running out of disk space). The simplest way to reduce the volume of data stored on Exchange (and therefore data which needs to be backed up and recovered on Exchange) is to reduce the mailbox quota of the end users.

In the post Exchange mailbox quotas and ‘a paradox of thrift’, I talked about the downside of trying too hard to reduce mailbox sizes – the temptation is for users to stuff everything into a PST file, which then gets backed up (or risks being lost!) outside of Exchange. Maybe it’s better to invest in keeping more data online on Exchange, where it’s always accessible from any client – unlike some archiving systems, which require client-side software (rendering the data inaccessible to non-Outlook clients), don’t replicate it to users’ PCs when running in Cached Mode, and don’t index it for easy retrieval by either the Exchange server or the client PC.

NET: Taking data off Exchange and into either user’s PST archive files, or a centralised archiving system, may reduce the utility of the information by making it less easy to find and access, and could introduce more complex data management procedures as well as potential additional costs of ownership.

Coming to a datacenter near you

An interesting piece of “sleeper” technology may help settle some of these backup discussions: it’s known simply as DPM, or System Center Data Protection Manager to give it its full title. DPM has been available for a while, targeted at backing up and restoring file server data, but the second release (DPM 2007) is due soon and adds support for Exchange (as well as Sharepoint and SQL databases). In essence, DPM is an application which runs on Windows Server and manages snapshots of the data source(s) it’s been assigned to protect. The server will happily take snapshots at regular intervals and can keep them in a near-line state or spool them off to offline (ie tape) storage for long-term archival.


With very low-cost but high-capacity disks (such as Serial Attached SCSI arrays or even SATA disks deployed in fault-tolerant configurations), it could be possible to have DPM servers capable of backing up many Tbs of data as the first or second line of backup, before spooling off to tape on an occasional basis for offsite storage. A lot of this technology has been around in some form for years (with storage vendors typically having their own proprietary mechanisms to create & manage the snapshots), but with a combination of Windows’ Volume Shadow Copy Service (VSS), Exchange’s support for VSS, and DPM providing the back-end to the whole process, the cost of entry could be significantly lower.

NET: Keeping online snapshots of important systems doesn’t need to be as expensive as in the past, and can provide a better RTO and RPO than alternatives.

So, it’s important to think about how you back up and restore the Exchange servers in your organisation, but by using Exchange 2007, you could give the users a lot more quota than they’ve had before. Using Managed Folders in Exchange, you could cajole the users into clearing out the stuff they don’t need to keep, and into more easily keeping the stuff they do. All the while, it’s now possible to make sure the data is backed up quickly and at much lower cost than would previously have been possible with such volumes of data.

Lelouch’s “C’etait un Rendezvous” gets mashed

Someone has taken the petrol-heads’ classic film, a 9-minute dash through early morning Paris known simply as “Rendezvous”, and built a mash-up between Google Video and Google Maps to show the route he was taking. Who needs another excuse to watch this film? Well, you’ve got it now.


Rendezvous, if you hadn’t heard the story, was a film shot by French director Claude Lelouch, allegedly with a professional driver at the wheel of Lelouch’s Ferrari 275GTB. In reality, it was a Mercedes saloon with Lelouch himself driving, and he dubbed the soundtrack on later (though it does sound pretty realistic to me).


Legend has it that he was arrested immediately following the first showing of the film: no surprise, since what it shows is completely illegal – driving at over 100mph through red-lights, the wrong way down one-way streets etc. It’s still strangely compelling, though, even if you know it’s a bit of a fake…


(thanks to Steve for the link)

BBC iPlayer kicks up a stink

It’s been interesting reading various news articles about the fact that the soon-to-be-released BBC iPlayer application will initially be available only to Internet Explorer and Windows XP users. The Register reports that a group called the Open Source Consortium is due to meet with the BBC Trust, since the service will not be available at all to users of, for example, Firefox or Linux.

The Guardian‘s coverage points out that the same issues behind the iPlayer are shared with the commercial broadcasters’ services (ie Channel 4 and Sky). Channel 4 says:

Will I be able to access 4oD on my Mac?

Unfortunately not at the launch of 4oD.
This is an industry-wide issue caused because the accepted Digital Rights Management (DRM) system used to protect online video content, which is required by our content owners, is not compatible with Apple Mac hardware and software. The closed DRM system used by Apple is not currently available for licence by third parties and there is no other Mac-compatible DRM solution which meets the protection requirements of content owners. Unfortunately, we are therefore unable to offer 4oD content to Mac users at this stage.

The fact is, all of these services are being required to use DRM since they don’t own much of the content they’re “broadcasting”, and the content owners are saying that they’ll only allow it to be broadcast if it can be protected. And nobody has (yet) built a DRM system for the other platforms in question that is up to the job of securing the content (with the exception of FairPlay, which Apple won’t license).

Someone from the BBC comments about the fact that the Windows DRM may be a target for hackers…

“We expect it to get broken. When it gets broken, Microsoft releases a new version [of DRM] and the application gets updated. It’s an imperfect solution. But it’s the least imperfect solution of them all.”

So, it’s interesting that the Open Source Consortium is threatening to take this whole thing to the European Union under an anti-trust banner. What’s better – provide an innovative service to 70-85% of the market, or have no service to anyone because the content providers won’t allow it? Sure, the latter example is “fairer” since it doesn’t favour one platform vs another, but is it really in the best interests of the end users…?