Random blog-like rambling from Rachel's brain. A mixed up mess of usability posts, fiction, and travel.

On Google Wave's Complexity and Usability

Over the summer, Google released a rather astonishing video of Google Wave in action. All across the internet enthusiasm was at an incredible high. Google Wave was going to be a paradigm shift of a collaboration suite. It was going to surpass email as the way people communicate with each other. I work a bit with collaboration software, mostly Sharepoint, so I was particularly interested in where this might go. Thus, it was with excitement that I acquired an invite to Wave.

Now, I've been scooting around this interface for about a week attempting to figure things out and I've come to a few conclusions.

The first: Google Wave won't replace email

Why not? It's far too complicated. What makes email a powerful medium is, in actuality, its pure simplicity. The learning curve for email is almost trivially small. Wave is chock full of features, but in a sense they are more features than the average person requires, and that is a barrier to entry. I want to stress that this is not so much a criticism as it is merely an observation. Wave probably isn't seeking to solve the email problem.

The second: chat is necessary for real time collaboration

I hadn't realized this before, but in the process of attempting to plan a trip to Greece with a friend using Wave as our platform, I found the lack of a true chat interface unbearably frustrating. While you can "ping" a person in Wave, that ping merely acts as a mini wave. It's fully featured, which is absolute overkill for a quick chat, and it's also saved as a separate wave from the one you are currently working on, meaning the chat and any decisions made there are separated from the rest of your collaboration work.

I think for Wave to really take off, it needs to have a true chat feature, one that rather than being a wave is truly optimized for chat. For me, the lack of this was so frustrating I had to supplement my work in Wave with chatting over Adium. 

The third: there really needs to be connectivity between Wave and other Google Apps

You're welcome to attach documents to your waves, but there doesn't appear to be a way to link in Google's already relatively successful collaboration suite. Prior to using Wave for our "Plan a Trip to Greece" project, my friend Shane and I had both a Google Doc of information and a Google Map of places we wanted to visit. My enthusiasm for Wave was dampened when it became clear I could not capitalize on this existing work within the application. I could drop in a map gadget, but I'd have had to build my whole map again from scratch.

It felt extremely limiting to not be able to bring in work completed in other places, especially considering that those other places are ... well ... Google.

The fourth: this interface is wicked confusing

I have general faith that this will improve with time, but there are a lot of little things about the Wave interface that make it frustrating to use. I won't detail all of them here, but here's a brief sampling:

1. Why the funky scroll bars? They're a little clunky and it confuses me a bit that they didn't just use standard scroll bars which work perfectly well.

2. In a long wave, how can I jump to the unread changes? Right now I can't find a way to do this. If the changes are spread throughout the wave, it's extremely difficult to move through it to find the relevant changes.

3. Nested replies quickly get out of hand. For one thing, having to double click just to find these features is messy. For another, those nested replies don't always show up where you expect them to, and as they grow, following them becomes difficult. I think they might have worked better displayed in the style of comments in Microsoft Office's revision mode.


Wave might yet prove to be a paradigm shifting project. Niggling usability issues are a part of any release like that, so it doesn't concern me overly much. However, the lack of Google Docs and Maps integration surprises me and makes me wonder how long we'll have to wait to see what I view as very necessary new features.

One other thing that came to mind as I was playing with this... will the masses, who aren't perhaps looking for a robust collaboration solution, find themselves driven to use Wave at all? I have my doubts. Most of us in our day to day lives don't require that much complexity and jumping right into it is an overwhelming experience.

On the other side of the fence we have Gmail quietly doing its thing. More and more it seems to me that it's Gmail that could really take off as a collaboration platform everyone can start using. It has already integrated chat into the email client, and it's starting to build in a connection with Google Docs as well. It's a way of easing users into a more robust experience by trickling the features in over time. Not a bad approach, although maybe not a paradigm shifting one.

Some references and other opinions on Wave:

1. IT Pro agrees, Wave won't replace email

2. Mashable's opinion is generally positive

3. Louis Gray thinks Wave is way way too noisy

4. Some thoughts at CNET


Addendum to a Restricted Wikipedia

So it turns out there was perhaps some faulty reporting surrounding Wikipedia's plans for adding moderation to their workflow. Check out this article for the full details.

The summary is thus: Wikipedia is not really planning to introduce full moderation. What they are doing is thinking over a couple of ways that they could alleviate some of the problems I discussed in my post. Two approaches are being bandied about as you'll see in the linked article. The first is "flagged protection" which is pretty much the moderation style approach I discussed last time. This is in use already on the German version of Wikipedia.

I still feel hiding changes isn't a great idea, which brings me to approach number two. This one is called "patrolled revisions". It's a lot like what I was recommending: the edits go live immediately so everyone can see them, but the article itself is clearly noted as not vetted.

What Wikipedia will be doing is using approach number one as a replacement for articles that are currently locked down. So in that sense, things are getting more open. They'll also introduce approach number two on other articles about living people. Aside from those, all other articles will remain the same.

So all the criticism and panic is clearly premature. I'm actually quite ok with the approach as described here.

I guess in summary, you can't believe everything you read on the internet. Even if it is in the New York Times.


On Karma, Oh What is it Good For?

I don't believe in karma. At least, I don't believe that people demonstrably get what's coming to them based on their past behavior. Still, that's not what we're here to talk about. We are here to discuss internet karma.

Karma is that elusive number, setting, hidden voodoo that many sites of the Reddit and Digg variety use to elevate certain users above the wild fray. Karma, in theory, encourages users to submit quality content with the hopes that quality will equal higher karma. Higher karma, in turn, can also be used by the site itself to push content submitted by those users up higher than those submitted by newcomers or trolls.

Sounds pretty good, doesn't it? Well, many things have a tendency to sound good in theory and then fall apart when we irrational human beings actually get our hands on them. Internet karma is no different.

Karma is intended to work as an incentive system, and for a lot of people it does just that. That little number can become an obsession. Getting it higher, getting it highest, can turn into a goal that undermines the essential point of a site like Reddit. How so, you ask? Well, it's the karma whore issue, you see.

karma whore: originally coined at Slashdot, a karma whore plays to the prejudices of the masses to get positive moderation on their comments (via Urban Dictionary).

There are, of course, folks who take that definition to the very extreme, but to a small degree almost every member of an online community will end up at least a little susceptible to this phenomenon. The reason is that after a while of posting content that doesn't see a lot of traction and never makes it to the front page, a user is likely to take one of two paths:

1. Leave

2. Start posting content they know the community likes.

Thus the community feeds its own interests, and only those who are willing to play along see their karma increase.

This isn't that different from how we interact with other people offline of course. Like minds hive together, that's human nature, but what if we wanted to see something different happen in cyberspace? What if we wanted to create a community that instead of feeding our existing interests and beliefs expanded and challenged them? Karma, the way I've seen it used today, is an ideology that keeps that from happening.

On Reddit, karma accumulates as the net up votes on your submitted content go up. Imagine a situation in which, instead, the level of controversy on your content resulted in a karma increase. Rather than incentivizing the user to submit content they know will appeal to the beliefs of the community, this encourages the user to submit content that will be polarizing in some way. The net result would be a very different picture of the overall content submitted to the site. Certainly, you can view the controversial items on Reddit, but there's no system that outright encourages users to submit that kind of content.

Maybe we can go the other direction entirely. After all, the best way to firm up your beliefs is to have them challenged. Try this site idea on for size: instead of positive karma, we encourage negative karma. The more down votes you get, the higher your score.
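To make the contrast concrete, here's a minimal sketch of the three incentive schemes as scoring functions. The formulas (and names) are my own inventions for illustration; Reddit's actual ranking algorithm is more involved than any of these.

```python
def net_karma(ups: int, downs: int) -> int:
    """Classic net-vote karma: rewards content the community agrees with."""
    return ups - downs

def controversy_karma(ups: int, downs: int) -> int:
    """Rewards polarizing content: score is high only when both sides vote heavily."""
    if ups == 0 or downs == 0:
        return 0  # one-sided content earns nothing
    # Total engagement, scaled down by how lopsided the vote split is.
    return (ups + downs) * min(ups, downs) // max(ups, downs)

def contrarian_karma(ups: int, downs: int) -> int:
    """Inverts the incentive entirely: the more down votes, the higher the score."""
    return downs - ups
```

A post with 80 up votes and 20 down votes does well under `net_karma` but poorly under `controversy_karma`, while an evenly split 50/50 post does the reverse, which is exactly the shift in incentives being described.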

Clearly, there's still a failing in all of these systems: it is always possible to game them. So what if we abandon the idea entirely, at least as a visible, measurable entity? Hide the karma from users and tweak the algorithm on the back end to get your desired results. Will users still submit content if they aren't 'rewarded' in some fashion? I think so, provided your algorithm still works well enough that interesting, varied content crawls its way to the top.

So, karma, be it good or bad or controversial, certainly produces interesting dynamics in an online community. I'd love to see it used in a more varied or dare I say, backward fashion.

Keep it real guys, and keep your karma whoring to a minimum.

On a Restricted Wikipedia

A quick note: some things discussed in this article were later shown to be not entirely factual. For an update, see this post: Addendum to On a Restricted Wikipedia. I still feel that the discussion here is worthwhile though, so the rest of this entry remains unedited. Enjoy.

The big news in social media lately, at least from where I'm sitting, is the slow introduction of moderation to the enormously successful Wikipedia. The New York Times reported today that in a matter of weeks users of Wikipedia will be faced with a new barrier to entry, so to speak. Articles about living people will now be protected, and edits to them will have to be approved by a "trusted" editor (still a volunteer, notably).

This is clearly a fundamental change to the original spirit of Wikipedia, which up until now has made its way with self-policing as its primary means of protecting its content. Why the change of heart?

Well, let's look at some other news surrounding our favorite informational site.

1. Composer Maurice Jarre dies at age 84. Newspapers all over the world include with his obituary the quote "When I die there will be a final waltz playing in my head, that only I can hear". A fine, lovely quote that could not have been more perfect for the situation. Of course, it was a fake, added to the man's Wikipedia page by a sociology student. (read about that here)

2. Journalist David Rohde spent 7 months in captivity after being kidnapped by the Taliban in Afghanistan. An editor at Wikipedia repeatedly tried to update the site with this information only to have it continually pulled down. Turns out, Wikipedia was in cahoots with the NYT to keep Rohde's kidnapping a secret, reportedly in order to increase his chances of survival. (explanation, from the NYT)

3. In 2005 the Wikipedia page for John Seigenthaler, Robert Kennedy's administrative assistant in the 1960s, was edited to claim the man was connected to the Kennedy assassination. The offending information was removed at Seigenthaler's request by Wikipedia administration. (in his own words)

4. More humorously, in 2006 Stephen Colbert encouraged users of Wikipedia to log on to the site and edit articles on elephants to indicate that their population had tripled in the last six months. Not long after, nearly 20 articles on the site had been accordingly vandalized and had to be locked. Colbert's account was also blocked. (more details)

I could probably hunt up various other examples of shenanigans and outright vandalism of a more sinister kind if I liked, but this probably suffices. It is certainly enough to show why the founders and key players at the Wikimedia Foundation would be thinking about moving towards a moderation model. Still, are these good enough reasons to fundamentally change the spirit that has gotten Wikipedia where it is today?

The original NYT article argues that given Wikipedia's significance and ubiquity, it is critical that it be carefully moderated to avoid the kinds of issues I listed above. There is a genuine fear here that false information could quickly be spread with no oversight. Those obituaries quoting Jarre from Wikipedia certainly did not bother to do any significant source checking, resulting in misinformation on Wikipedia suddenly being backed up by seemingly more authoritative sources.

That's relatively compelling at first glance, but should Wikipedia necessarily be picking up the slack for decidedly lazy reporting? I don't think so, and to play devil's advocate, I think a restricted, moderated Wikipedia is flawed in some significant ways.

Firstly, Wikipedia was initially created as an experiment, if you will, to see what would result if you created a free encyclopedia run by volunteers that anyone, literally anyone, could edit. The result? The most popular source of information on the internet today. Whenever you google for something, the first results are nearly always from Wikipedia. That's a testament to the power of that initial experiment. In a way, it proves it worked.

Would moderating content change that? Yes, in some ways it would. Now the power to actually update the information will lie in the hands of an elite, specially selected group of editors. That creates a barrier to entry for some folks. It also means that all the power lies in these people's hands. Not to jump to conclusions, but imagine what this would mean if that group of editors had a particular political bent. The information making it onto the site could well turn out to be biased.

Is that a reasonable risk? What about all that potentially false information that gets out there when there isn't any moderation? Well, I for one want to lean towards the side of openness. What we forget in the stories above is that they were eventually revealed, reported on, and corrected. Potentially there are examples that were not, but what's wonderful about every Wikipedia article is that often overlooked tab: "discussion". Here there's a running conversation on why certain edits were made and debates about whether something should be changed. It can get down to pointless minutiae, but what's wonderful about it is that we all have access to see those discussions. Add in a moderation layer, and we have no idea what changes were proposed, which were rejected, and why.

There is actually another, mid-point solution: openness plus moderation, and I think this is a good direction to go. Envision this: a user submits an edit on an article. As soon as they click save, it's viewable to the whole Wikipedia viewing public, with one difference: it's visually denoted as a pending, unverified change. As soon as the moderators have a chance, they can clear it for permanent inclusion or reject it. Rejected edits should be saved and viewable, and should always be shown with the reason for the rejection.
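The workflow above can be sketched as a toy model. This is purely illustrative Python of my own; the class and method names are hypothetical and don't correspond to anything Wikipedia actually runs.

```python
from dataclasses import dataclass, field

# The three states an edit can be in under the open-plus-moderation scheme.
PENDING, APPROVED, REJECTED = "pending", "approved", "rejected"

@dataclass
class Edit:
    author: str
    text: str
    status: str = PENDING
    rejection_reason: str = ""

@dataclass
class Article:
    title: str
    edits: list = field(default_factory=list)

    def submit(self, author: str, text: str) -> Edit:
        # The edit is publicly visible immediately, flagged as pending.
        edit = Edit(author, text)
        self.edits.append(edit)
        return edit

    def approve(self, edit: Edit) -> None:
        edit.status = APPROVED

    def reject(self, edit: Edit, reason: str) -> None:
        # Rejected edits are never deleted; the reason stays attached.
        edit.status = REJECTED
        edit.rejection_reason = reason

    def visible_edits(self) -> list:
        # Everything remains public: approved, pending, and rejected alike.
        return [(e.status, e.text, e.rejection_reason) for e in self.edits]
```

The key design point is that `reject` changes a status rather than removing anything, so the audit trail the "discussion" tab provides today is preserved rather than hidden behind the moderators.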

I fully understand the reason for wanting moderation. It's a natural progression for any online community as it grows to critical mass. Still, the openness of Wikipedia has always been a fundamental part of its ethos and power. You don't necessarily have to give that up. In fact, you shouldn't have to give it up at all.

Read the Original NYT Article Here