The Future of Feed Reading: What Do YOU Want?

Every now and then I’ll see a blog post predicting the future of feed reading, and invariably it’s written by someone who spends every waking moment reading their feeds.  Which is fine, of course – we certainly want to know what power users expect from the future of RSS.  But predictions from these folks are usually based on what they need from RSS, and their needs don’t always match the needs of the majority.

Most people who use an RSS reader don’t live in it.  They use it to stay up-to-date with the latest news from the blogosphere, to keep tabs on what people they trust are talking about, or simply to kill some time between more important tasks.

This blog post is aimed at these people – the ones who love their RSS reader but don’t feel withdrawal symptoms when they skip it for a day.

What do you want from your RSS reader in the future?  If you could change the future of feed reading to suit your needs, what would you want that future to look like?

Feed Retrieval Intervals and Non-Updating Feeds

Greg Reinacker recently blogged about NewsGator feed retrieval intervals.  For me, the money quote is this one:

"One of the more common questions/complaints we get is something about a feed not appearing to update in a timely manner. 99% of the time, it’s actually a problem with the feed."

I can certainly attest to the fact that we receive frequent complaints about a specific feed not updating, and when we check into the problem, it’s almost always due to the feed itself.  As Greg explains:

"These are feeds which have returned some kind of error, and they are “penalized” for it. For example – if a feed 404’s, it is immediately penalized for 24 hours. A 500 server error? 4 hours. Other kinds of errors (including parsing problems) cause penalties of varying lengths, taking into account how many consecutive errors we see."

Read the full post if you’re interested in more details.
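Greg doesn’t publish the exact algorithm, but the pattern he describes is a familiar one: back off when a feed errors, and back off harder when it keeps erroring.  Here’s a minimal sketch of that kind of penalty scheme in Python – the only numbers taken from Greg’s post are the 24-hour and 4-hour penalties; the default penalty, the doubling rule, and the weekly cap are my own illustrative assumptions, not NewsGator’s actual values.

from datetime import datetime, timedelta

# Penalty durations. The 404 and 500 values come from Greg's post;
# the default for other errors (including parsing problems) is a guess.
PENALTIES = {
    404: timedelta(hours=24),   # feed not found: wait a full day
    500: timedelta(hours=4),    # server error: wait a few hours
}
DEFAULT_PENALTY = timedelta(minutes=30)

def next_poll_time(status_code, consecutive_errors, now=None):
    """Return the earliest time this feed should be retrieved again."""
    now = now or datetime.utcnow()
    penalty = PENALTIES.get(status_code, DEFAULT_PENALTY)
    # Feeds that keep failing get escalating penalties, capped so that even
    # a chronically broken feed is still retried about once a week.
    penalty = min(penalty * (2 ** max(consecutive_errors - 1, 0)), timedelta(days=7))
    return now + penalty

A healthy feed never hits this code path at all, which is Greg’s point: if a feed suddenly stops updating in your aggregator, the first thing to check is whether the feed itself is returning errors.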

Why Use a Desktop RSS Reader?

Over the past few years, I've noticed a number of people asking why anyone would use a desktop RSS reader.  These comments generally focused on two points:

  1. Web-based readers are also free, and unlike desktop apps, you can access them from anywhere
  2. Desktop readers have to constantly retrieve feeds, causing unnecessary bandwidth burden on the local client as well as the sites they're downloading from

Both points are easily dismissed by the fact that FeedDemon offers synchronization.  You can read your feeds on multiple computers and have your subscriptions and read items automatically synchronized between them.

And synchronization means that our desktop readers don't retrieve feeds from their source sites.  Instead, they're downloaded through the web-based synchronization engine, which makes feed retrieval exceptionally fast.  Unlike non-synched desktop aggregators, synched readers don't have to download every single feed to see if something's new.  Instead, every few minutes they query the synchronization service to find out whether any of the user's feeds have new content, and if so, they then request the new content (and only the new content) from just those feeds.
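For the curious, here’s roughly what that conversation looks like in code.  Everything below is hypothetical – the service URL, the endpoints, and the callback for storing items are invented for illustration, not NewsGator’s actual API – but it shows why a synched reader is so much lighter on bandwidth than one that re-fetches every feed.

import time
import requests  # third-party HTTP library

SYNC_API = "https://sync.example.com"   # hypothetical synchronization service

def poll_for_updates(session, store_items, poll_interval=300):
    """Poll the sync service and download only feeds that actually changed.

    store_items is a caller-supplied callback that persists items locally.
    """
    while True:
        # One lightweight request tells us which subscriptions have new content...
        changed_feeds = session.get(f"{SYNC_API}/feeds/changed").json()
        for feed_id in changed_feeds:
            # ...so we only fetch the unread items for those feeds, rather than
            # re-downloading every subscribed feed from its source site.
            items = session.get(f"{SYNC_API}/feeds/{feed_id}/items",
                                params={"unread": "true"}).json()
            store_items(feed_id, items)
        time.sleep(poll_interval)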

Those points aside, there are a number of reasons why many people prefer desktop RSS readers (so much so that they were willing to pay for a desktop reader like FeedDemon despite free web-based alternatives). Long-time FeedDemon user Amit Agarwal did a nice job highlighting some of these reasons in his blog earlier this week, but here are a few more:

  1. Most web-based readers can't subscribe to secure feeds.  I don't know about you, but that's a show-stopper for me – I have a number of password-protected feeds that I absolutely have to keep track of.
  2. Web-based readers can't access "behind-the-firewall" feeds.  For example, we have an internal server which runs FogBugz, and I'm subscribed to several FogBugz feeds which alert me to problem reports and inquiries regarding my software.  I can't add these critically important feeds to a web-based reader.
  3. Most web-based readers offer no offline support, and even when they do, offline reading is still far better in FeedDemon (this screencast shows why).  FeedDemon doesn't just download your articles so you can read them offline – it can also prefetch the images they contain and the pages they link to, enabling you to browse the web without an Internet connection.  Your web-based reader can't do that. This is one of those features that you don't think you'll need – until you do.
  4. Many desktop readers are full-fledged web browsers, complete with access to your favorites, tabbed browsing, etc.  In fact, FeedDemon is my web browser – I rarely use an external browser anymore.  If you haven't used a browser that's also a powerful RSS reader, you're missing out.
  5. Desktop readers have access to local resources, enabling a slew of features that aren't available in web-based readers.  For example, desktop readers can integrate with your favorite blogging client, or download podcasts and copy them to your iPod or WMP device.  NetNewsWire even integrates with iPhoto, Twitterrific, Mail, and iCal.
  6. Desktop readers give you a choice about which feeds to keep completely private.  Want your reading habits regarding a subset of your FeedDemon subscriptions kept completely on your local computer?  Just put them in a folder that's not synchronized.
  7. And of course, speed is often another benefit.  Web app performance has become a lot better over the past few years, but we're not at the point where JavaScript in the browser can compete with native performance :)

Now, I'm not knocking web-based readers – after all, we offer one of our own – but people who choose to use a desktop reader have good reasons for doing so.

PS: As I've written before, I think the so-called battle between web and desktop apps is overblown.  It's a hybrid world, not an either-or situation.

How Does FeedDemon Calculate Attention?

In a recent blog comment, Paul M. Watson asked:

"I’d be interested in more detail on how you compute the scores [which determine a feed’s attention]. Nothing that gives away your competitive edge of course but just some generalizations of what you are tracking that amounts to attention."

FeedDemon’s algorithm for determining a feed’s attention rank has changed since I first wrote about it, but it’s still very simple.  I certainly don’t think I’ll be giving away any competitive edge by posting details, so here it is:

Feed Attention Rank =
(NumFeedVisitsExplicit div 2)
+ (NumFeedVisits div 4)
+ (NumPostVisits div 5)
+ (NumFollowedLinks div 3)
+ (NumEnclosureVisits div 2)
+ (NumPostsEverFlagged * 2)
+ (NumPostsEmailed * 2)
+ (NumPostsAddedToNewsBins * 2)
+ (NumPostsAddedToSharedNewsBins * 3)
+ (NumPostsAddedToWatches)

Where:

NumFeedVisitsExplicit = #times user visited a feed by explicitly clicking it
NumFeedVisits = #times user visited a feed through automatic navigation (ex: clicking "Next")
NumPostVisits = #times a post in that feed was visited
NumFollowedLinks = #times an external hyperlink inside a post was clicked
NumEnclosureVisits = #enclosures (podcasts) downloaded from the feed
NumPostsEverFlagged = #posts user ever flagged
NumPostsEmailed = #posts forwarded via email
NumPostsAddedToNewsBins = #posts added to a clippings folder ("news bin" in v2.5)
NumPostsAddedToSharedNewsBins = #posts added to a clippings folder that has a shared RSS feed
NumPostsAddedToWatches = #posts picked up by a FeedDemon watch
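Transcribed into Python – with integer division standing in for div, and the counters passed in as a dict – the calculation is just a weighted sum.  This is a direct transcription of the formula above, not FeedDemon’s actual code:

def attention_rank(stats):
    """Compute a feed's attention rank from its usage counters.

    stats maps the counter names defined above to their values;
    missing counters are treated as zero.
    """
    def g(name):
        return stats.get(name, 0)

    return (g("NumFeedVisitsExplicit") // 2
            + g("NumFeedVisits") // 4
            + g("NumPostVisits") // 5
            + g("NumFollowedLinks") // 3
            + g("NumEnclosureVisits") // 2
            + g("NumPostsEverFlagged") * 2
            + g("NumPostsEmailed") * 2
            + g("NumPostsAddedToNewsBins") * 2
            + g("NumPostsAddedToSharedNewsBins") * 3
            + g("NumPostsAddedToWatches"))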

One of Paul’s concerns was that high-output blogs which he skims through without reading would get ranked too highly. I attempt to counteract this in several ways, with admittedly mixed success.  The most obvious way is by giving post visits the lowest weight in the algorithm (NumPostVisits div 5). And I give the highest weight to actions such as flagging, clipping or emailing a post, since those actions are proof that you find the post valuable.

One potentially important thing that’s missing here is that I don’t "decay" attention over time, but in reality this happens automatically.  For example, if you stop paying attention to a feed that has a high attention rank, its rank will stop increasing, whereas the rank of feeds you do still pay attention to will continue to increase.

This is illustrated by the screenshot from my recent post about the attention report in FeedDemon 2.6, which shows that I was paying the most attention to the feed for the TopStyle Support Forum (since TopStyle 3.5 was in beta at the time).  Now that TopStyle 3.5 has been released and I’m working on FeedDemon 2.6, the TopStyle feed has fallen to second place behind my feed for the FeedDemon Support Forum:

I’m curious as to how accurate FeedDemon customers find the new attention report.  Does it for the most part reflect the attention you’re paying to your feeds, or do you find it wildly out of sync with the feeds you’re really paying attention to?

NewsGator’s Free iPhone RSS Reader Updated

For the past few weeks I’ve been using – and loving – a beta version of our iPhone RSS reader, and according to our mobile guru Kevin Cawley, the new version went live last night.  The updated version is even faster than before, and has some really nice additions.

My favorite improvement is how it’s smart enough to return to your list of feeds after you mark the last item in a feed or folder as read (I’m a fan of anything that saves a click, especially on a mobile device).  And I love how articles I clip on the iPhone reader automatically show up in FeedDemon’s synched clippings.

If you’re looking for a great RSS reader for your iPhone (or any mobile device, for that matter), give it a try at http://m.newsgator.com/

Rex Hammock: FeedDemon’s "Popular Topics" is like a personalized Techmeme

My friend Rex Hammock writes about FeedDemon’s "Popular Topics":

"The FeedDemon feature is, in effect, a meme-tracker. However, instead of analyzing news stories and relevant blog posts that are being linked to by a mysterious universe of topical-bloggers (or folks trying to game it), the feature analyzes the stories that are being linked to by those in a network of bloggers you choose — those to whose RSS feeds you’ve subscribed…In other words, it’s like having a Techmeme that is “memetracking” topics important to just those bloggers you desire to follow, rather than all bloggers who post on the topic."

I have to say, it’s nice to see popular topics getting some attention.  The first time I showed this feature to anyone was at BloggerCon IV in 2006, and I remember Chris Pirillo being impressed by it.  Popular topics has been greatly improved since then, especially in the new FeedDemon 2.6 pre-release, but I’ve seen very few comments about it.

IMO, a "personal memetracker" like FeedDemon’s popular topics is a killer feature for aggregators.  It’s something we should expect in our RSS readers, because the more information we subscribe to, the more we need a feature that shows us what the people we’re paying attention to are paying attention to.  Dare Obasanjo considers it one of the top five features for the next version of RSS Bandit, and he nails why it’s important (emphasis mine):

"I’m now officially at the point where I don’t have enough time to read all the feeds I have in my subscription list anymore. For the most part, I’ve gotten around this by browsing programming.reddit, Techmeme and Sam Ruby’s MeMeme about once or twice a day. Although they are all great, the problem I have is that there are parts of the blogosphere that none of these sites is good at tracking. For example, none of these sites is really on top of the Microsoft employee blogosphere which I’m interested in for obvious reasons.  I’ve been talking about building a feature similar to FeedDemon’s popular topics for a long time but I’ve now gotten to the point where I don’t think I can get a lot of value out of my blog subscriptions without having this feature."

I’m willing to bet that over the next year or two we’ll see personal memetrackers appear in more desktop aggregators, and despite the computational and scaling problems, it wouldn’t surprise me if more web-based aggregators also offered this feature.
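The core idea is simple enough to sketch: tally the outbound links found in recent items across your own subscriptions and surface the most-linked targets.  The sketch below is just an illustration of that idea – the URL normalization and data shapes are my own assumptions, and it isn’t how FeedDemon’s popular topics is actually implemented:

from collections import Counter
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Drop query strings and fragments so minor URL variations count as one target."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path.rstrip("/"), "", ""))

def popular_links(recent_items, top_n=10):
    """recent_items: iterable of (feed_title, outbound_link_urls) from your subscriptions."""
    counts = Counter()
    sources = {}
    for feed_title, links in recent_items:
        for link in {normalize(link) for link in links}:  # count each feed once per target
            counts[link] += 1
            sources.setdefault(link, set()).add(feed_title)
    # The most-linked URLs among the feeds *you* read are your "popular topics."
    return [(url, count, sorted(sources[url]))
            for url, count in counts.most_common(top_n)]

Doing this on the client is cheap when the universe is a few hundred subscriptions; doing it server-side for every user is where the computational and scaling problems come in.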

FeedDemon and RSS Comments

Just noticed that Dave Winer is wondering which feed readers support the RSS 2.0 comments element, so I thought I’d chime in and mention that FeedDemon is among the many aggregators which support it.  When an item contains a comments element, FeedDemon displays a "comment bubble" icon which links to the page containing the comments for that item.

In addition to the standard RSS 2.0 comments element, FeedDemon also supports wfw:comment, wfw:commentRss, slash:comments and the Atom threading extensions.
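If you’re curious what reading those elements looks like in practice, here’s a rough sketch using Python’s ElementTree.  The wfw and slash namespace URIs are the standard ones; the code itself is only illustrative and isn’t FeedDemon’s implementation:

import xml.etree.ElementTree as ET

NS = {
    "wfw":   "http://wellformedweb.org/CommentAPI/",
    "slash": "http://purl.org/rss/1.0/modules/slash/",
}

def comment_info(item):
    """Extract comment metadata from an RSS <item> (an ElementTree element)."""
    return {
        # Standard RSS 2.0 <comments>: the page where the item's comments live.
        "comments_page": item.findtext("comments"),
        # slash:comments: a plain count, suitable for showing "4 comments" next to an icon.
        "comment_count": int(item.findtext("slash:comments", namespaces=NS) or 0),
        # wfw:commentRss: a separate, subscribable feed of the comments themselves.
        "comment_feed": item.findtext("wfw:commentRss", namespaces=NS),
    }

# Usage: parse the feed, then inspect each item.
# channel = ET.fromstring(feed_xml).find("channel")
# for item in channel.findall("item"):
#     print(comment_info(item))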

When an item uses slash:comments or the Atom threading extensions, FeedDemon displays a comment count next to the comment icon, and the count is refreshed when the feed updates.  Here’s an example from my feed, which shows an item with four comments:

Finally, FeedDemon displays the orange feed icon with a small comment bubble superimposed on it for items which expose a separate feed for comments on that item (via wfw:commentRss or an Atom "replies" link).  Subscribing to the comment feed in FeedDemon is as simple as clicking this icon.  Here’s an example from Sam Ruby’s feed showing both a comment count and a comment feed:

The Best Way to Increase Your Feed Readership…

…is to use great titles.  Seriously.

Here’s an example: Steve Rubel’s "The Web 2.0 World is Skunk Drunk on Its Own Kool-Aid" rant caught my eye yesterday because of its great title.  After reading Rubel’s post, I added it to my link blog, where it was spotted by Steven Hodson.  As Hodson writes, he unsubscribed from Rubel’s feed a while ago, but he just resubscribed based on the strength of that one post – and I’ll wager that the post’s title is why it got his attention.

The more feeds people subscribe to, the more they stop reading every unread item and instead just skim the titles looking for something that interests them.  If you use boring titles for your posts, skimmers like me are likely to skip right over them.

In addition, once people get used to reading feeds, they start subscribing to link blogs and search feeds which aggregate content from all over the web.  People who aren’t subscribed to your feed often find you through these aggregate feeds, and it’s the strength of your titles that leads them to read what you have to say.

Now, I’m not about to recommend using sensationalist, "National Enquirer"-like titles – that would just pollute your name/brand, leading people to unsubscribe from your feed.  But descriptive, catchy titles get the attention of readers who might otherwise never see your words of wisdom.

So if you’re going to take the time to write a blog post, make sure to also take the time to give it a good title.  Yeah, I know that sounds painfully obvious, but a quick glance at your unread items should provide plenty of examples of interesting posts that go ignored because of lousy titles.

NOINDEX at the Item Level

Dave Winer writes about how he’d like a way to exclude specific items in his RSS feed from appearing on TechMeme, and suggests a TechMeme namespace for RSS as one possibility.

Rather than create a TechMeme-specific namespace, I’d prefer to see the existing noindex meta tag adapted for use on a per-item basis. For example, right now you can add this to your feed to prevent search engines from spidering it:

<xhtml:meta xmlns:xhtml="http://www.w3.org/1999/xhtml" name="robots" content="noindex" />

I use this on the feed for my link blog (since my link blog contains items from other feeds, using noindex helps feed search engines avoid duplication), and both Yahoo and Google honor it.

So how about we adapt this for use on a per-item basis, so that individual items can be excluded without excluding the entire feed? Search engines and sites like TechMeme could simply ignore any items that are flagged with noindex.
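For example – and this is only a suggestion on my part, not an existing convention – the same meta element might simply appear inside an individual item:

<item>
  <title>An item that should stay out of search engines and memetrackers</title>
  <xhtml:meta xmlns:xhtml="http://www.w3.org/1999/xhtml" name="robots" content="noindex" />
</item>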

Of course, this approach wouldn’t single out TechMeme – any site that honors noindex would skip the item – so it doesn’t entirely fulfill Dave’s request. But if preventing a specific site from indexing an item is something that feed creators want, then perhaps a user-agent attribute is needed (similar to the User-agent line in robots.txt).