WSJ Online Feeds and FeedDemon 1.11.3

If you’re a subscriber to The Wall Street Journal Online, you’ll be happy to hear that a set of WSJ Online feeds is now available. If you’re not a subscriber, you can still access the feeds – but they’ll show only headlines as opposed to full content (hat tip to Kevin H. Stecyk for letting me know about these feeds).

When I first viewed the WSJ feeds in FeedDemon, I discovered that the wrong titles were showing in the channel bar. At first I thought there must be a problem with the feeds themselves, but it turns out the problem was with FeedDemon. So, I’ve posted a new build of FeedDemon which fixes this bug – just visit the download page to get it. As always, existing users should install the new build directly on top of the current build. All of your existing information (including your serial number) will be retained.

More on RSS bandwidth consumption

A few months back I wrote about RSS bandwidth consumption, and this subject is again in the news following Chad Dickerson’s recent InfoWorld column about his love/hate relationship with RSS. Dickerson notes that desktop RSS readers that hit a feed too frequently – and then download the full feed even when it hasn’t changed – place a huge load on servers.

However, as Dare Obasanjo points out, many of those complaining about RSS bandwidth consumption fail to configure their own servers to address the problem. Dare shows that InfoWorld’s feed supports neither GZip encoding nor conditional HTTP GET, both of which would dramatically decrease RSS bandwidth consumption. The latest RSS reader stats show that all the major readers support these techniques, so make sure your server (and/or the feed itself) does too. If you have a static feed, chances are your server handles this for you – but if you have a dynamic feed (i.e., one created on the fly with PHP or ASP), you may need to make some changes.
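For a dynamic feed, the fix usually means generating a validator (such as an ETag) for the feed and returning 304 Not Modified when the client presents a matching one. Here’s a minimal sketch in Python – the function name and the tuple it returns are my own invention, not part of any framework – showing the idea independent of whatever server-side language you actually use:

```python
import hashlib

def respond_to_feed_request(feed_xml, request_headers):
    """Sketch of conditional-GET handling for a dynamically generated feed.

    Derives a strong ETag from the feed content; if the client's
    If-None-Match header matches, answers 304 with an empty body so the
    unchanged feed is never re-downloaded. Returns (status, headers, body).
    """
    etag = '"%s"' % hashlib.md5(feed_xml.encode("utf-8")).hexdigest()
    if request_headers.get("If-None-Match") == etag:
        # Client already has this exact content: skip the payload entirely.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, feed_xml.encode("utf-8")
```

The same response could also be GZip-compressed when the client advertises `Accept-Encoding: gzip`, which stacks with conditional GET for further savings.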

In the past, raising this topic has been followed by naive calls to stop using desktop RSS readers in favor of web-based applications, since web-based aggregators consume less bandwidth. I’m far too biased to argue about desktop vs. web aggregators, but the argument is moot: many people find the UI and feature set of web-based apps too limiting for their needs and will always want a desktop application (witness Outlook vs. Hotmail). Arguing for either type of application is pointless, since each will be around for a long time.

BTW, I’m glad to see that Sam Ruby is proposing to update the Atom spec and the feed validator to support conditional HTTP GET. My guess is that a lot of bandwidth will be saved once the feed validator warns about feeds that don’t take advantage of the If-Modified-Since and If-None-Match HTTP headers.
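On the aggregator side, supporting those headers just means remembering the validators from the last successful fetch and sending them back on the next poll. A minimal sketch (the helper name is mine, not from any particular reader):

```python
def conditional_request_headers(last_response_headers):
    """Build the validator headers an aggregator should send on its next
    poll, based on what the server returned last time.

    Last-Modified becomes If-Modified-Since; ETag becomes If-None-Match.
    If the server sent neither, the request goes out unconditional.
    """
    headers = {}
    if "Last-Modified" in last_response_headers:
        headers["If-Modified-Since"] = last_response_headers["Last-Modified"]
    if "ETag" in last_response_headers:
        headers["If-None-Match"] = last_response_headers["ETag"]
    return headers
```

A server that honors these will answer 304 Not Modified with no body whenever the feed hasn’t changed, which is where the bandwidth savings come from.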

Oh, and since I mentioned RSS reader stats, I have to get this off my chest: server stats are not an accurate representation of the popularity of individual RSS readers. A number of RSS readers default to checking for updates every hour, whereas FeedDemon defaults to checking every three hours. So, three times as many people would need to use FeedDemon for it to be ranked equally with these other apps.

Proposed clarification for RSS 2.0 spec (revised)

Two weeks ago I mentioned a proposed clarification for the RSS 2.0 spec, and the proposal has since been revised. While I’m disappointed that the spec couldn’t be re-worded to stipulate that the RSS 2.0 <description> element is always entity-encoded HTML, I’m glad that the link to the examples will remain.

Quite often I receive support questions from people asking why specific feeds don’t look right in FeedDemon, and more often than not the problem is due to the feed author incorrectly encoding HTML entities. Being able to point to an “official” set of encoding examples will be a big help in these cases.
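To illustrate the kind of encoding mistake those examples address: HTML placed inside an RSS 2.0 <description> must be entity-encoded so that its angle brackets and ampersands don’t read as XML markup. A quick sketch in Python (the helper function is hypothetical; the escaping itself is standard XML entity encoding):

```python
from xml.sax.saxutils import escape

def description_element(html_content):
    """Wrap HTML content in an RSS <description> element, entity-encoding
    it so '<', '>' and '&' in the HTML don't break the feed's XML."""
    return "<description>%s</description>" % escape(html_content)
```

Feeds that skip this step – emitting raw `<b>` tags or bare ampersands inside the element – are exactly the ones that render incorrectly in aggregators.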