The End of Glassboard

Justin Williams brings the unfortunate but inevitable news about the end of Glassboard:

Over the last year we have tried a variety of different methods of converting Glassboard into a sustainable business. The reality is that we failed to do that.

Starting next week, we will be converting everyone’s account to a premium account for the remaining few weeks so that you can export your boards and keep an archive should you desire.

I’m sad to see Glassboard go away. It was the first Android app I wrote, and I have great memories of the team of friends I worked with at Sepia Labs. I feel bad for Justin that he invested so much time and money in keeping Glassboard alive only to see it fail to gain traction.

For me personally, Glassboard was a reaction to the whittling away of online privacy. I’m proud to have worked on something that said “privacy is important” at a time when so many other apps were sharing, leaking, and even stealing, your private information. None of us foresaw Edward Snowden, of course, but we did foresee a backlash against the loss of privacy which I believe is still in its infancy.

It would be easy to blame Glassboard’s failure on users’ lack of concern for their privacy, but I think it had more to do with our flawed initial experience and downright terrible business model.

Our initial user experience made it hard to get started with the app, which killed any chance of the viral growth necessary to build a large user base. And we did so little to promote our premium version that very few Glassboard users knew we even had a premium version (and those that were aware of it saw little reason to upgrade).

There’s certainly no guarantee Glassboard would’ve succeeded had we not made those mistakes – as Brent Simmons points out, an app like Glassboard “is going to be a challenge no matter what” – but I do think those mistakes guaranteed it wouldn’t succeed.

A New Home for Glassboard

Three years ago I joined a team that set out to create something different: a social app that values your privacy. We wanted to make an app that enabled you to share only with people you trust – no privacy settings necessary.

We’d all seen the coming storm: we knew that concerns about privacy were the next big thing. We wanted to build an app that said, “privacy is important, don’t give it up.”

But we failed.

Glassboard didn’t succeed, and is in need of a new home.

Our biggest blunder was that our focus on privacy made it hard to get started with Glassboard. Twitter, Facebook, Instagram, etc., let you immediately find and follow your friends, but after you installed Glassboard you felt like you were the only one using it. Not exactly a great first-run experience.

We also had a lousy business model, but that’s a post for another day.

I continue to believe that we need more software that respects your privacy. I look at the current crop of privacy-violating social apps and shudder when I think of where they’ll take us a decade or two down the road. Or even next year.

I’d love to see Glassboard taken over by someone who shares our commitment to privacy and can take it where we never could.

Girls Around Me Shows Why Privacy Shouldn’t be an Option

Earlier this week I wrote that privacy shouldn’t be an option. Privacy – like security – should be expected, not something that users have to enable.

Need proof? Just read about Girls Around Me, an incredibly ill-conceived app that takes advantage of women who don’t know they need to configure the privacy settings in the social software they use.

Software that isn’t private by default assumes users know how to make it private, which is an unrealistic – and, in the case of “Girls Around Me,” potentially dangerous – assumption.


I write the Android version of Glassboard, an app designed for private conversations. Find out more at glassboard.com.


Privacy is not an Option

Years ago, in the Dark Ages of desktop software, security was an option. People used software that was insecure by default, but if they knew where to look they could turn on various options that made the software secure.

Microsoft Outlook used to be like that: by default it would allow viruses to be emailed to you, but you could configure it to be secure if you knew where the security options were.

Then people started getting all sorts of nasty viruses via email, and Microsoft wised up. They stopped treating security as opt-in and started making their software secure by default.

Fast forward to today and we’re seeing a similar situation with privacy.

By default most social software isn’t private – it’s configured to share everything about you, not just with people you know but also with advertisers. You have to figure out where the privacy settings are – and what they mean – if you want the software to respect your privacy.

And as with the opt-in security settings of the past, today’s opt-in privacy settings are leading to all sorts of problems. Every day we see headlines about privacy violations that could’ve been avoided if we used software that didn’t treat privacy as an option.

Software developers need to look at privacy the same way we’ve learned to look at security: it’s not an add-on or a feature that customers have to turn on, it’s something built-in that shouldn’t be turned off.
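To make that principle concrete, here's a minimal sketch of "private by default" – every setting starts at its most restrictive value, so a brand-new account shares nothing until the user explicitly opts in. (This is an illustrative example; all names are hypothetical, not Glassboard's actual code.)

```python
# A minimal sketch of "private by default": every field starts at its most
# restrictive value, so an untouched settings object is already private.
# All names here are hypothetical, not Glassboard's code.
from dataclasses import dataclass

@dataclass
class SharingSettings:
    profile_public: bool = False          # profile hidden unless opted in
    share_with_advertisers: bool = False  # no ad-targeting data by default
    broadcast_activity: bool = False      # activity visible only to the user

    def opt_in(self, setting: str) -> None:
        """Opting in is an explicit user action, never an implicit one."""
        if setting not in self.__dataclass_fields__:
            raise ValueError(f"unknown setting: {setting}")
        setattr(self, setting, True)

# A fresh install is already private -- no configuration required.
settings = SharingSettings()
print(settings.profile_public)  # → False
```

The point of the design is that a user who never opens the settings screen is the safest user, not the most exposed one.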




Digital Ownership and the Path to Privacy

What does it mean to "own" something that exists only in digital form? If the answer is we don't really own things that are digital, then does that mean we don't own our private information when it's merely bits of data?

Those questions reflect our inability to value non-physical things. We as customers look at digital goods as less worthy of monetary value, and companies look at customer data as less worthy of privacy. In both cases, we devalue things we can't touch.

There are plenty of examples of companies who believe the rules of privacy and ownership are different online than they are in the physical world. Mobile apps upload our address books without permission, websites track us without our knowledge, media corporations secretly install rootkits on our computers, and online stores sell us digital goods we thought we owned but merely leased instead.

Yes, the tech and entertainment industries pretend they value digital items when they rail against piracy, but they suddenly get fuzzy when it comes to valuing our digital rights.

Yet piracy also reflects how we as customers value digital data. Many of us pay for music, movies and software when it comes in a box but steal it in digital form, as though the real value of a piece of music is in its packaging instead of in its artistry.

We lash out against companies that violate our privacy, yet fail to see how our unwillingness to value their digital goods in some small way led to the prevalence of a business model that gives the actual product away and earns money by selling our personal information.

And we never noticed that in order to own all this free stuff, the free stuff gets to own us back.

Let’s Not Screw Up Mobile Privacy

This week we saw headlines about mobile apps that violate your privacy by uploading your address book without permission.

These kinds of “mistakes” helped kill demand for desktop software, and I’d hate to see history repeat itself with mobile software.

Several years ago I wrote that desktop software is paralyzed by fear due to all the frightening warnings that show up when you try to install something. It wasn’t just the rise of Web apps that led people away from desktop apps: it was also because installing desktop apps became too damn scary.

The same thing could happen to mobile software. Repeated privacy violations will force mobile OS vendors to show more warnings, scaring customers away from trying new apps.

Mobile developers who want to avoid that possible future should accept the idea that data on the device must stay on the device unless the user has given permission to upload or share it. Being able to access data on the device in no way implies ownership of that data.
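That rule can be sketched as a consent gate sitting in front of every upload path, one that fails closed when the user hasn't said yes. (A hypothetical example – the class and method names are mine, not any real mobile API.)

```python
# A minimal sketch of "data on the device stays on the device unless the
# user has given permission": every upload runs through a gate that fails
# closed. The names are hypothetical, not any real mobile API.
class ConsentRequiredError(Exception):
    """Raised when an upload is attempted without explicit user consent."""

class UploadGate:
    def __init__(self) -> None:
        self._granted: set[str] = set()  # data categories the user approved

    def grant(self, category: str) -> None:
        """Record the user's explicit, per-category permission."""
        self._granted.add(category)

    def upload(self, category: str, payload: bytes) -> str:
        # Being able to read the data does not imply permission to send it.
        if category not in self._granted:
            raise ConsentRequiredError(f"no consent to upload {category}")
        return f"uploaded {len(payload)} bytes of {category}"

gate = UploadGate()
gate.grant("contacts")  # only after the user explicitly agrees
print(gate.upload("contacts", b"vcard-data"))  # → uploaded 10 bytes of contacts
```

The key design choice is that the gate defaults to denial: forgetting to ask for consent produces an error, not a silent upload.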

The Friction in Frictionless Sharing

Facebook claims that frictionless sharing makes sharing easier – that by taking away the friction, they’ve improved the usability of sharing.

So let’s look at it from a usability perspective.

This is an oversimplification, but we can think of frictionless sharing as an attempt to replace a per-article confirmation dialog (“Share this article?”) with a single one-time dialog (“Share everything you read from now on?”). Instead of requiring the user to confirm every single article they choose to share, it gives them a one-time dialog that enables them to share everything down the road.

That’s a lot less work for the user, right?

Well, no, not really. Because in the past the user only had to decide whether to share something they just read, but now they have to think about every single article before they even read it. If I read this article, then everyone will know I read it, and do I really want people to know I read it?

That creates more friction, not less.

And let’s not forget the friction the user experiences as they browse around the Web. Now they have to remember which sites are automatically sharing what they read. Did I allow a Facebook app to share what I read on this site? I don’t remember, so I’d better not click that link.

So frictionless sharing isn’t frictionless after all. All it does is trade the small friction of having to choose what to share for the large friction of having to think about whether what you’re about to do will be shared.

Privacy is Important

“You have zero privacy anyway. Get over it.” – Scott McNealy, former CEO of Sun Microsystems.

When I was invited to join Sepia Labs and create the Android version of Glassboard, I stressed that privacy was the key to our success. Companies like Facebook and Google are trying to convince millions of us that we can trust them with our privacy, but millions of us remain unconvinced.

These companies make the majority of their revenue from advertising, and advertisers are willing to pay more when they know exactly who their ads will be shown to. We’re expected to entrust our private conversations to companies that don’t benefit from keeping our conversations private. Red flag, anyone?

“If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” – Eric Schmidt, Google’s executive chairman and former CEO.

It’s not that we fear saying things that we don’t want anyone to know, it’s that we fear saying something without knowing who will hear it.

We want to be able to say something online without fearing that a future employer may see it and count it against us. We want to complain about the country we live in without fear of reprisal. We want to share pictures of our kids without wondering who else will see them. We want to share with only the people we choose to share with.

When we know a conversation is private, we’re more willing to share ourselves. It feels good to share who we are, to open up to the people we trust. When we don’t know who will hear us, we censor ourselves and hide the rough edges of who we are. But those rough edges help define us. It’s impossible to feel truly loved if you have to hide parts of who you are.

It’s time for us to say, “No, I won’t get over it. Privacy is important, and I won’t give it up.” Today’s software developers need to look at privacy the same way they’ve learned to look at security: it’s not an add-on or a feature that customers have to turn on, it’s something built-in that shouldn’t be turned off.

I hope more companies follow our lead and take the same approach to privacy that Glassboard has. I think the web is headed in the wrong direction, and the more of us who participate in trying to change that direction, the more likely it is to change.