Blog

Random blog-like rambling from Rachel's brain. A mixed up mess of usability posts, fiction, and travel.

On Privacy and Security, Or What's Up With Facebook?

Unless you've been living in a cave, thus embracing the life of a true Luddite, you've probably heard about Facebook's recent war against privacy. For the past few months it seems as if every other day America's biggest social networking site has done something new to chip away at the already ill-defined sense of privacy within its trenches. Here's something of a brief summary, in case you have been in hiding:

A while back, Facebook quietly made your status updates indexable by Google. The move was a bid to be more like Twitter, and it surprised a user base that had never really realized their statuses were potentially going to be public knowledge. An uproar occurred for a few days, then dissipated as people quickly forgot about caring.

Later, Facebook introduced a way for other websites to consume your data. That's right, Facebook quietly started sharing your interests and information with websites you may not even know about, generally without telling you.

A couple of days ago, Facebook launched a feature to link all of your interests to actual pages, spamming your wall with new advertising and sharing your information with those pages as well. In this case there was an opt-out option, but it never really explained what the feature was to begin with. Opting out is hardly useful without proper knowledge.

Learn about some of this at PC World

So what's the deal? Not long before all of this, there was a very public kerfuffle when Google launched its social sharing system, Buzz. Buzz made a relatively grievous mistake at launch, automatically making your whole contact list privy to what you were sharing on Google Reader and whatever else you linked into Buzz. It didn't at the time make clear what was happening, and the fallout was pretty swift. People freaked out at a level that was somewhat astonishing. Yes, it seemed from this example that people cared about their privacy.

Some background on Buzz's launch.

Google and Facebook are hardly the only ones flailing about in this game, although they are the most visible. Netflix slipped into this quagmire when the company was sued over its latest recommendation improvement contest for allegedly outing an in-the-closet lesbian mom based on her movie watching history. The money management site Mint had problems when it announced that your shopping history, although aggregated and anonymous, might be sold to marketing research firms. Some people canceled their accounts with great speed.

What's interesting to me about much of this is how surprised people are to realize that what they are doing on the web is potentially going to make itself public. I myself have always treated Facebook, despite its privacy setting options, as data that I would not be devastated about becoming public. That's me though, and I've been playing this game for quite some time. People unfamiliar with the internet, young users and older users in particular, don't really understand what's going on when they post things to the web. New users to Twitter are often surprised to realize that everything they tweet is public knowledge. At the very least, people's grasp of exactly who is included in "the public" is pretty slim. Take the young guy who was offered a job at Cisco and promptly lost it when he tweeted: "Cisco just offered me a job! Now I have to weigh the utility of a fatty paycheck against the daily commute to San Jose and hating the work."

That's right kids, your employer and your mom are both part of the public these days. Watch your step.

A lot of these privacy missteps, both on the part of the companies involved and of those kids foolishly posting public photos of themselves shot-gunning beer at parties and then wondering why they didn't get that plush job, come down to a simple failure to grasp what's going on. These are mistakes, and much of the time the companies are quick to take corrective action. Google responded to the Buzz complaints within hours. Netflix canceled its contest pretty promptly. There's one company, though, that seems to flaunt its issues with privacy, and that's Facebook.

Facebook wields a lot of power. It's the first social networking site that's really collected such a vast and diverse user community that truly, actively participates. Each user is a font of delicious information about interests, from movies and food to books and TV shows. Each status message is another bit of data telling Facebook where you like to go, what you like to do and who you like to do it with. It is a marketing company's dream database, and we are all quietly working to make it more impressive every day. The truth of it is, Facebook does not really think of us as customers of its site but as unpaid employees entering data in a constant stream. That's why they are so alarmingly cavalier about how they handle that data.

The cynical among us have known this for years, but what's sneaky about Facebook is the front it puts up. It somewhat slyly pretends to care about your privacy. There are account settings where you can set the privacy level for any number of aspects of your Facebook posts, but here's the problem: how easy is any of this to set up? When it comes to UI design, I'd be the last person to give Facebook any awards, but far and away the most confusing part of the deeply complex interface is the part that ought to be the most clear: how to ensure that your data is protected.

There is this concept in usability called an "evil interface". When you've learned enough about design, you aren't just capable of delivering designs that are easy to use; you are also capable of designing interfaces to be purposefully obtuse. A naive designer makes mistakes; an evil designer doesn't make mistakes so much as he or she makes your life difficult, because they do not want you to accomplish the task at hand. It is in Facebook's best interest (at least from their perspective) for your data to be public and for them to be able to sell it. Given that, what reason would they have NOT to make the privacy settings confusing?

EFF has a great article about Facebook's Evil Interfaces that I highly recommend.

We designers are told from the very beginning that we need to design interfaces that are transparent and easy to use for everyone. In my own work, I often struggle to take a step back and try to look at things from the perspective of a technology novice. Doing everything we can for our users is an ethical responsibility as much as it is a skill, especially when it comes to something as delicate as private data.

So should you quit Facebook? That's up to you, naturally, but you should be an evangelist for your friends on Facebook. Many of them won't really know the truth of this, and Facebook certainly isn't going to tell them. It's up to us really, so spread the word. When I publish this post, I will almost assuredly share it on Facebook.

Addendum to a Restricted Wikipedia

So it turns out there was perhaps some faulty reporting surrounding Wikipedia's plans for adding moderation to their workflow. Check out this article for the full details.

The summary is thus: Wikipedia is not really planning to introduce full moderation. What they are doing is thinking over a couple of ways they could alleviate some of the problems I discussed in my post. Two approaches are being bandied about, as you'll see in the linked article. The first is "flagged protection", which is pretty much the moderation-style approach I discussed last time. It is already in use on the German version of Wikipedia.

I still feel hiding changes isn't a great idea, which brings me to approach number two. This one is called "patrolled revisions". It is a lot like what I was recommending: the edits go live immediately so everyone can see them, but the article itself is clearly noted as not yet vetted.

What Wikipedia will be doing is using approach number one as a replacement for the full lock-down on articles that are currently protected. So in that sense, things are getting more open. They'll also introduce approach number two on other articles about living people. Aside from those, all other articles will remain the same.

So all the criticism and panic is clearly premature. I'm actually quite OK with the approach as described here.

I guess in summary, you can't believe everything you read on the internet. Even if it is in the New York Times.

 

On Karma, Oh What is it Good For?

I don't believe in karma. At least, I don't believe that people demonstrably get what's coming to them based on their past behavior. Still, that's not what we're here to talk about. We are here to discuss internet karma.

Karma is that elusive number, setting, hidden voodoo that many sites of the Reddit and Digg variety use to elevate certain users above the wild fray. Karma, in theory, encourages users to submit quality content, with the hope that quality will equal higher karma. Higher karma, in turn, can also be used by the site itself to push content submitted by those users higher than content submitted by newcomers or trolls.
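For the technically curious, here's a rough sketch of how that kind of karma-weighted ranking boost might work. It's purely illustrative: the function names and weights are invented, and this is not how Reddit or Digg actually score submissions.

```python
# A minimal, hypothetical sketch of karma feeding back into ranking.
# The names and weights are invented for illustration only.

def submission_score(upvotes, downvotes, submitter_karma):
    """Base score is net votes; the submitter's karma nudges it upward."""
    net_votes = upvotes - downvotes
    # A small multiplier so established users get a head start,
    # but votes still dominate the ranking.
    karma_boost = 1.0 + min(submitter_karma, 1000) / 5000.0
    return net_votes * karma_boost

# A newcomer and a high-karma user submit equally popular content:
print(submission_score(upvotes=50, downvotes=10, submitter_karma=0))     # 40.0
print(submission_score(upvotes=50, downvotes=10, submitter_karma=1000))  # 48.0
```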

Sounds pretty good, doesn't it? Well, many, many things have a tendency to sound good in theory and then fall apart when we irrational human beings actually get our hands on them. Internet karma is no different.

Karma is intended to work as an incentive system, and for a lot of people it certainly does just that. That little number can become an obsession. Getting it higher, getting to be the highest, can turn into a goal that undermines the essential point of a site like Reddit. How so, you ask? Well, it's the karma whore issue, you see.

karma whore: originally coined at Slashdot, a karma whore plays to the prejudices of the masses to get positive moderation on their comments (via Urban Dictionary).

There are, of course, folks who take that definition to the very extreme, but to some small degree almost every member of an online community is going to end up at least a little susceptible to this phenomenon. The reason is that after a while of posting content that doesn't see a lot of traction and never makes it to the front page, a user is likely to take one of two paths:

1. Leave

2. Start posting content they know the community likes.

So the community feeds its own interests, and only those who are willing to play along see their karma increase.

This isn't that different from how we interact with other people offline of course. Like minds hive together, that's human nature, but what if we wanted to see something different happen in cyberspace? What if we wanted to create a community that instead of feeding our existing interests and beliefs expanded and challenged them? Karma, the way I've seen it used today, is an ideology that keeps that from happening.

On Reddit, karma accumulates as the net upvotes on your submitted content go up. Imagine a situation in which, instead, the level of controversy on your content resulted in a karma increase. Instead of incentivizing users to submit content they know will appeal to the beliefs of the community, this encourages them to submit content that will be polarizing in some way. The net result would be a very different picture of the overall content submitted to the site. Certainly, you can view the controversial items on Reddit, but there's no system that outright encourages users to submit that kind of content.

Maybe we can go the other direction entirely. After all, the best way to firm up your beliefs is to have them challenged. Try this site idea on for size: instead of positive karma, we encourage negative karma. The more downvotes you get, the higher your score.
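Here's a rough sketch, again with invented Python functions rather than any real site's algorithm, of what these three flavors of karma might look like side by side: the conventional net-upvote score, a controversy score, and the contrarian downvote score.

```python
# Hypothetical scoring functions for the three flavors of karma discussed
# above. Illustrative sketches only, not any real site's algorithm.

def net_karma(ups, downs):
    """The conventional approach: karma grows with net upvotes."""
    return ups - downs

def controversy_karma(ups, downs):
    """Reward polarizing content: lots of votes, split roughly down the middle."""
    if ups == 0 or downs == 0:
        return 0
    balance = min(ups, downs) / max(ups, downs)   # 1.0 when perfectly split
    return (ups + downs) * balance                # scaled by total engagement

def inverse_karma(ups, downs):
    """The contrarian version: the more downvotes, the higher the score."""
    return downs

# A crowd-pleaser versus a polarizing post:
print(net_karma(90, 10), controversy_karma(90, 10))   # 80, ~11.1
print(net_karma(55, 45), controversy_karma(55, 45))   # 10, ~81.8
```

Notice how the crowd-pleaser wins under the conventional scheme while the split-the-room post wins under the controversy scheme. That single swap is enough to change what kind of content the community is rewarded for submitting.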

Clearly, there's still a failing in all of these systems: it is still always possible to game them. So what if we abandon the idea entirely, at least as a visible, measurable entity? Hide the karma from users and tweak the algorithm on the back end to get your desired results. Will users still submit content if they aren't 'rewarded' in some fashion? I think so, provided your algorithm still works well enough that interesting, varied content crawls its way to the top.

So, karma, be it good or bad or controversial, certainly produces interesting dynamics in an online community. I'd love to see it used in a more varied or dare I say, backward fashion.

Keep it real guys, and keep your karma whoring to a minimum.

On a Restricted Wikipedia

A quick note: some things discussed in this article were later shown to be not entirely factual. For an update, see this post: Addendum to On a Restricted Wikipedia. I still feel that the discussion here is worthwhile though, so the rest of this entry remains unedited. Enjoy.

The big news in social media lately, at least from where I'm sitting, is the slow introduction of moderation to the enormously successful Wikipedia. The New York Times reported today that in a matter of weeks users of Wikipedia will be faced with a new barrier to entry, so to speak. Articles about living people will now be protected, and edits to them will have to be approved by a "trusted" editor (still a volunteer, notably).

This is clearly a fundamental change to the original spirit of Wikipedia, which up until now has made its way with self-policing as its primary means of protecting its content. Why the change of heart?

Well, let's look at some other news surrounding our favorite informational site.

1. Composer Maurice Jarre dies at age 84. Newspapers all over the world include with his obituary the quote "When I die there will be a final waltz playing in my head, that only I can hear". A fine, lovely quote that could not have been more perfect for the situation. Of course, it was a fake, added to the man's Wikipedia page by a sociology student. (read about that here)

2. Journalist David Rohde spent seven months in captivity after being kidnapped by the Taliban in Afghanistan. An editor at Wikipedia repeatedly tried to update the site with this information, only to have it continually pulled down. Turns out, Wikipedia was in cahoots with the NYT to keep Rohde's kidnapping a secret, reportedly in order to increase his chances of survival. (explanation, from the NYT)

3. In 2005 the Wikipedia page for John Seigenthaler, Robert Kennedy's administrative assistant in the 1960s, was edited to claim the man was connected to the Kennedy assassination. The offending information was removed at Seigenthaler's request by Wikipedia administration. (in his own words)

4. More humorously, in 2006 Stephen Colbert encouraged users of Wikipedia to log on to the site and edit articles on elephants to indicate that their population had tripled in the last six months. Not long after, nearly 20 articles on the site had been accordingly vandalized and had to be locked. Colbert's account was also blocked. (more details)

I could probably hunt up various other examples of shenanigans and outright vandalism of a more sinister kind if I liked, but this probably suffices. It is certainly enough to show why the founders and key players at the Wikimedia Foundation would be thinking about moving towards a moderation model. Still, are these good enough reasons to fundamentally change the spirit that has gotten Wikipedia where it is today?

The original NYT article argues that given Wikipedia's significance and ubiquity, it is critical that it be carefully moderated to avoid the kinds of issues I listed above. There is a genuine fear here that false information could quickly be spread with no oversight. The obituaries quoting Jarre via Wikipedia certainly did not bother to do any significant source checking, resulting in misinformation on Wikipedia suddenly being backed up by seemingly more authoritative sources.

That's relatively compelling at first glance, but should Wikipedia necessarily be picking up the slack for decidedly lazy reporting? I don't think so, and to play devil's advocate, I think a restricted, moderated Wikipedia is flawed in some significant ways.

Firstly, Wikipedia was initially created as an experiment, if you will, to see what would result if you created a free encyclopedia run by volunteers that anyone, literally anyone, could edit. The result? The most popular source of information on the internet today. Whenever you google for something, the first results are nearly always from Wikipedia. That's a testament to the power of that initial experiment. In a way, it proves it worked.

Would moderating content change that? Yes, in some ways it would. The power to actually update the information would now lie in the hands of an elite group of specially selected editors. That creates a barrier to entry for some folks. It also means that all the power lies in these people's hands. Not to jump to conclusions, but imagine what this would mean if that group of editors had a particular political bent. The information making it onto the site may well turn out to be biased.

Is that a reasonable risk? What about all that potentially false information that gets out there when there isn't any moderation? Well, I for one want to lean towards the side of openness. What we forget about the stories above is that they were eventually revealed, reported on and corrected. Potentially there are examples that were not, but what's wonderful about every Wikipedia article is that often-overlooked tab: "discussion". There you'll find a running conversation on why certain edits were made and debates about whether something should be changed. It can get down to pointless minutiae, but what's wonderful about it is that we all have access to those discussions. Add in a moderation layer, and we have no idea what changes were proposed, which were rejected, and why.

There is actually another, mid-point solution. You can have openness + moderation, and I think this is a good direction to go. Envision this: a user submits an edit to an article. As soon as they click save, it's viewable to the whole Wikipedia-viewing public, with one difference: it's visually denoted as a pending, unverified change. As soon as the moderators have a chance, they can clear it for permanent inclusion or reject it. Rejected edits should be saved and viewable, and should always be shown with the reason for the rejection.
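If you like thinking in code, here's a tiny Python sketch of that flow. The class and method names are invented for illustration; this is not how MediaWiki's flagged or patrolled revisions are actually implemented.

```python
# A rough sketch of the "openness + moderation" flow described above.
# Names (Edit, Article, review, etc.) are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Edit:
    author: str
    text: str
    status: str = "pending"          # pending -> approved | rejected
    rejection_reason: Optional[str] = None

@dataclass
class Article:
    title: str
    edits: List[Edit] = field(default_factory=list)

    def submit(self, author, text):
        """Every edit is immediately visible, just flagged as unverified."""
        edit = Edit(author, text)
        self.edits.append(edit)
        return edit

    def review(self, edit, approve, reason=None):
        """Moderators later approve, or reject with a visible reason."""
        edit.status = "approved" if approve else "rejected"
        edit.rejection_reason = None if approve else reason

    def visible_history(self):
        """Everyone can still see all edits, including rejected ones and why."""
        return [(e.author, e.status, e.rejection_reason) for e in self.edits]
```

The key design point is that nothing ever disappears: rejected edits stay in the visible history along with the reason for rejection, which preserves the transparency of the "discussion" culture while still giving moderators the final say.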

I fully understand the reason for wanting moderation. It's a natural progression for any online community as it grows to critical mass. Still, the openness of Wikipedia has always been a fundamental part of its ethos and power. You don't necessarily have to give that up. In fact, you shouldn't have to give it up at all.

Read the Original NYT Article Here

On Social Networking Demographics

I've spent more time than is potentially healthy thinking about Twitter recently. This is partially due to conversations at the office about the popular micro-blogging service, but its ever-presence in the news over the last several months is keeping it on my mind as well. Last time I waxed a bit poetic about how Twitter has become ingrained in our communication system today. This time, it's going to be all about demographics and statistics.

Huh? Here's the thing. Twitter has a really interesting demographic makeup. A demographic makeup that has me really curious about a few things.

My interest was piqued when I saw a set of two fine articles put together by Peter Corbett at iStrategyLabs. The first was a summary of the demographic breakdown of Twitter.  Dig around in the numbers and at least one really interesting thing pops out:

More than half of Twitter users are over the age of 35

Why, you ask, is that so interesting? Well, traditionally speaking, social networking services are the playground of the young. Folks under the age of 35 have, in the past, been the early adopters of this kind of technology and have continued to make up the bulk of the user groups. That's why marketing attempts on Facebook or MySpace are geared to a younger audience. So, that has me thinking: why is Twitter different?

OK, first I'll bet you want proof of some kind that it actually is different. That brings me to the second article, which helpfully provides a breakdown of Facebook demographics. We can see here that around 70% of Facebook users are under the age of 35. MySpace, although a service that is falling out of favor, skews even younger, with over 30% of its users clocking in at under 18 and around 75% total under the age of 35 (that data was pulled from this presentation). So Twitter is something of an unusual animal in that landscape.

Some folks have speculated that this is a result of the apparent inbred narcissism of the younger generation. Those much-maligned Millennials and members of Generation Y (my own horribly named generation) "realize that no one is viewing their profile, so their tweets are pointless" (read more about a 15-year-old analyst who made that pronouncement). I alluded to that a bit last time: that Twitter is a communication device equivalent to shouting into the void. Do younger people find such shouting dreary since the likelihood of anyone shouting back is so small?

The preference of the young does appear to be for services that provide a more immediate feedback mechanism. You post a status on Facebook and your friends comment on it. You link to something interesting, they comment on it. Wait though, it isn't as if this isn't possible with Twitter. I can post an update and my friends or followers respond with a friendly @rknickme. It isn't so different, is it?

But it is, actually. See, there is so much more you can comment on in Facebook. Twitter, on the other hand, feels, to someone used to that flexibility, like a status update mechanism without much else going on. Heck, you can't throw a sheep at someone on Twitter, can you? (Personal aside: I detest Superpoke.)

So, that's my pet theory. Young users want a wealth of features at their fingertips. Twitter by its nature isn't any more or less narcissistic than Facebook or MySpace. All the services are ways of broadcasting yourself and awaiting feedback and justification from the masses. What's really different is this:

1. Number of Features

2. Customization

3. Simplicity or lack thereof.

Twitter could not be simpler to get the hang of. Make an account, post a tweet. The tweets can only be 140 characters, so even that initial barrier of looking for something of substance to say is pulled down quite a bit. It is uncluttered with applications and add-ons. There isn't (or wasn't until recently) much confusion surrounding privacy settings. Facebook, in comparison, is a minefield of confusion with an interface to match. That, I propose, is why Twitter has become the playground for the over-35s.

What's a bit magical about that is how it leads into another interesting statistic from those Facebook demographics I mentioned. The fastest growing group on Facebook? 35-to-54-year-olds, with the over-54s a tight second. Why the influx? Maybe it is as simple as this completely imaginary story:

A 40-year-old woman is surfing the web looking for information about why her Comcast internet isn't working. She stumbles serendipitously onto ComcastCares on Twitter. She decides to try this newfangled thing out, and creates a Twitter account in order to ask the kindly people running that service what the deal might be. Now she's in. Before you know it she's sending tweets out multiple times a day, and she's following a slew of celebrities along with her family and friends.

Later, a well-loved nephew or niece tells this woman about a service she's only vaguely familiar with. It's called Facebook, and the kids say you can do even more with it than the magical Twitter. They say she'll be able to add applications like Visual Bookshelf so her friends will know what she's reading. They say she can connect to her Netflix queue and everyone can see what she's watching. A couple of months ago all of that might have sounded complicated and overwhelming, but our friend has been twittering for a while. She knows the basics of how this stuff works. The barrier to entry has been effectively lowered.

Maybe that's part of the picture. Surely, there are a number of intersecting reasons for all those juicy statistics, but the view that it can all be summed up as an effect of those narcissistic 20-somethings is extremely limited. The landscape of social networking on today's internet is a grand and multifaceted picture.

Of course, if I'm partly right, and the growth on Facebook may in some way be attributed to the already strong numbers of over 35s on Twitter, what does that mean for the future of Twitter? Will we start to see usage slip if people migrate from one to the other?

I doubt it, but I do think we will see an evolution in the way Twitter is used. We're already seeing an increase in marketing and customer service use on the site, and I'm confident that will continue. Could Twitter eventually be overrun by celebrities and companies shilling their personal lives and goods for our consumption while we, the unwashed masses, hide away in Facebook to share when we're going grocery shopping or are attending some rad rock show?

It's hard to predict, but in another year, the internet, Twitter and Facebook will be a completely different animal than the one we see today.