Monday, January 31, 2011

Global Game Jam 2011

Last weekend was made of a small amount of sleep and lots of game development. In about thirty hours we designed and implemented a game, Hamsters and Plague. It's a tile-based puzzle game about the survival of hamsters after a plague. You can find information on the game on our project page. I have also started a blog for future development and other related discussion.

Making a game in such a short time is awesome in many ways, especially for learning. I will try to write a proper post mortem on our project when I'm less tired (later this week) and post it to the Hamsters and Plague blog. Naturally, actual game development experience is also important for my research. I once again realized how hard it is to evaluate the learnability of something you made yourself, and our game could definitely do a better job there - which I hope it will, if I have the time to develop it further.

I'm also thinking that something like this would be a really good way to teach programming and team development. A rather tight time limit does wonders for motivation, at least when the topic is one that interests everyone. While the school environment doesn't support over-the-weekend courses very well, something similar should be possible. Maybe we could give students credit points for participating in next year's Global Game Jam.

Wednesday, January 19, 2011

Digging into References

For those among my readers (assuming I have any) who are interested enough in this field to read scientific articles, I'm throwing you a bone. Several bones, actually. I've recently been digging into research related to my own as I'm preparing to write my first article and an initial literature review for my thesis.

Aesthetic interaction - a pragmatist's aesthetics of interactive systems (Marianne Graves Petersen, Ole Sejer Iversen, Peter Gall Krogh; 2004). This paper discusses two approaches to aesthetics in the design of interactive systems: analytic aesthetics and pragmatist aesthetics. Furthermore, the authors discuss aesthetic interaction, which is in many ways similar to what I call playful interaction. The authors introduce aesthetics as the fifth element of interaction design (the other four being system, tool, dialogue and media). Aesthetic interaction goes beyond so-called added value.

Ambiguity as a Resource for Design (William Gaver, Jacob Beaver, Steve Benford; 2003). The authors of this article question the HCI convention of designing for one correct interpretation of a system. Instead, they suggest, ambiguity can be used in various ways to enhance user experience. The article points out that if users are left to figure out a system for themselves, they will be more affectionate towards it, are more likely to accept it as is, and may find surprising uses for it. The authors present three example systems that use ambiguity, and three types of ambiguity: of information, of context and of relationship.

Designing Interaction, not Interfaces (Michel Beaudouin-Lafon, 2004). In this article, the author suggests a paradigm shift from designing interfaces to designing interaction. Interaction paradigms and interaction models are introduced, and the article delves into the computer-as-tool paradigm. Interaction models are frameworks for guiding designers. Interaction design, in contrast to interface design, means considering the how of interaction more deeply than simply constructing interfaces that are easy to understand and efficient - for example, considering how tool selection is done instead of designing an efficient toolbar.

Heuristics for Designing Enjoyable User Interfaces: Lessons from Computer Games (Thomas W Malone, 1981). The first paper I was able to find that suggests taking influences from computer games in designing user interfaces. It suggests heuristics in three categories: challenge, fantasy and curiosity. The same heuristics were first presented for instructional activities (also by Malone). The heuristics under challenge are goals and uncertain outcomes; under fantasy, emotional appeal and metaphors; and under curiosity, the optimal level of information complexity and "well-formed" knowledge structures. Overall this paper is a good starting point.

Making by Making Strange: Defamiliarization and the Design of Domestic Technologies (Genevieve Bell, Mark Blythe, Phoebe Sengers; 2005). The authors of this article argue that we can only incrementally improve upon current designs unless we defamiliarize ourselves from the subject. To stress their point, the authors present three studies that look at homes and domestic life outside our (Western) field of familiarity. They present twelve statements that defamiliarize certain standard HCI design goals. I couldn't agree more - compare this article to some of my earlier blog posts and you'll see what I mean.

Staying Open to Interpretation: Engaging Multiple Meanings in Design and Evaluation (Phoebe Sengers, Bill Gaver; 2006). This article questions one of the core HCI principles: a single authoritative interpretation. As already stated in The Design of Everyday Things (Don Norman), the goal is to make the designer's model understandable to the user. The authors here present six strategies for making designs that are open to interpretation, with examples for each. This paper continues along the same lines as the ambiguity paper above.

So there, some of the papers that will most likely influence my research.

Wednesday, January 12, 2011

Context-Awareness, Data Collection and Privacy

One huge issue in ubiquitous computing is privacy. Envisioned ubicomp applications are context-aware, i.e. they infer what the use context is from collected data. Smart everythings know what you are doing and can offer services just when you need them. This is all cool, but it has some serious potential problems. Namely, how do these smart things know the context? They collect data. A lot of web applications are already doing this, as are loyalty systems in grocery store chains. It's actually quite frightening how much people are willing to share.

There are two issues with data collection: misuse and theft. Misuse encompasses cases where data is used for purposes it was not given for. Theft encompasses cases where data is stolen by a third party. Legal agreements and software security are the means deployed against these issues, and guidelines exist for the ethical use of data. However, the consequences of even agreed-upon use of data can be highly unpredictable. These definitely do not get advertised.

The point is that ubicomp will require us to give up more and more data so that context-aware applications can make our lives better. I will now proceed to argue against heavy data collection for a bit and provide my ideas on how to achieve context-awareness without massive amounts of sensor (etc.) data.

The focus of interactive spaces has been outlined in this blog before, but I'll summarize it briefly so you can all guess where this argument is going. In our work, we seek to create environments that advertise services to users, but the ultimate selection and use of services is left to their own judgement. We emphasize intelligent user interaction in lieu of system intelligence. While I do believe some applications need to infer context from data using artificial intelligence, I feel it necessary to point out that more often than not it should be quite enough to simply make it known that a service is available. The user is in charge - our job is to make decision making and interaction effortless.
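To make the division of labour concrete, here is a minimal Python sketch of the advertise-then-choose model (all class and method names are my own invention for illustration, not from any real framework): the space only lists what is available, and using a service is always an explicit user action.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    description: str

class InteractiveSpace:
    """An environment that advertises services. It never acts on the
    user's behalf -- selecting and using a service is always an
    explicit user action."""

    def __init__(self):
        self._services = []

    def advertise(self, service):
        # The system's only "intelligence": making availability known.
        self._services.append(service)

    def available_services(self):
        # The space merely lists what is on offer ...
        return list(self._services)

    def use(self, service_name):
        # ... while launching anything is initiated by the user.
        for s in self._services:
            if s.name == service_name:
                return f"Launching {s.name}"
        raise KeyError(service_name)
```

The key design choice is that there is no inference step anywhere: the space never decides for the user, it only advertises.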

My view is that a lot of context-awareness problems could also be solved by using highly modular applications. As in the word processor example earlier, the user's actions indicate the context. If the user launches a particular application component, that action alone can tell the system a lot without knowing anything about the user. Take one example: mobile applications launched by touching RFID tags in an interactive space. Without identifying the user in any way, a lot can nevertheless be said about their location and intention simply from the fact that they touched the tag.
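The tag-touch idea can be sketched in a few lines of Python (the tag IDs, locations and component names below are hypothetical examples): the context comes entirely from the tag the user chose to touch, never from who the user is.

```python
# Each tag encodes only its own location and the application component
# it launches -- nothing about the person who touched it.
TAGS = {
    "tag-017": {"location": "cafeteria table 3", "component": "magazine-reader"},
    "tag-042": {"location": "lobby notice board", "component": "event-schedule"},
}

def on_tag_touched(tag_id):
    """Infer context purely from the touched tag: the touch tells the
    system where the (anonymous) user is and what they intend to do."""
    tag = TAGS[tag_id]
    return {
        "location": tag["location"],  # known from where the tag is placed
        "intent": tag["component"],   # implied by the user's own action
        "user": None,                 # deliberately never identified
    }
```

Notice that the `user` field is `None` by design; the system gets useful context without any personal data entering it.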

This does not necessarily lead to simple applications. With a proper framework, switching between different application components (in this example, touching another tag) should be made effortless. To achieve this, one important aspect is ensuring compatibility between applications. A simple example: I can pick up a magazine from an RFID tag with my mobile phone. When I'm taking a coffee break, I can touch a tag on the table to send the magazine to the table's built-in display for reading (which is also capable of displaying the book I'm currently reading). The applications are simple, but the system can easily expand. Most importantly, at no point is there any need to identify me as the user or to submit any data about me into the system. Unless my phone is stolen, no one knows what magazines I read.
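The compatibility point above can be sketched as a shared minimal interface (again a hypothetical illustration, not a real framework): if the phone and the table display both implement the same `show` operation, hand-off between them is just a re-show on the other device.

```python
class Display:
    """Anything that can render readable content -- a phone screen or a
    table's built-in display -- implements the same minimal interface.
    This shared interface is what makes device hand-off effortless."""

    def __init__(self, name):
        self.name = name
        self.content = None

    def show(self, content):
        self.content = content
        return f"{self.name} now showing {content!r}"

def transfer(content, source, target):
    # Moving the magazine from phone to table is just a re-show on a
    # compatible device; no user data ever enters the system.
    source.content = None
    return target.show(content)
```

Usage follows the coffee-break scenario: show the magazine on the phone, then transfer it to the table display. Neither device ever learns who the reader is.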

Of course my example is quite idealized. The magazine is free, so there are no payment issues (which are always more complicated). In many applications there will be a need to identify users. But the point is to always consider whether collecting data could be avoided by intelligent system design. My other, ongoing point is that automatic does not equal better. Certainly the coffee table in my example could have recognized me as the sitter and immediately presented magazines based on my preferences. Personally, I find this kind of creepy.

Technology should provide us with options, and we should be able to use those options as anonymously as possible. Just sayin'.

Tuesday, January 4, 2011

Interaction Conventions - A Local Maximum?

The desktop metaphor was introduced in the Xerox Star in 1981. This year it will be 30 years old, and it lives on. Of course it has evolved during these 30 years, but how much exactly? Not having used the Star, I'm still bound to guess "not much". Following conventions is a fundamental usability principle, but if we optimize usability by following the same conventions, effectively iterating over the same things again and again, are we bound to reach a local maximum instead of a global one (in terms of user experience)?

With desktop software, usability largely means operating within WIMP (Windows, Icons, Menus and Pointers). Graphical user interface toolkits generally share certain conventions. Following conventions is not in itself a bad thing - after all, familiarity acts as great leverage for learning new things. However, it becomes troublesome if the conventions themselves are made outdated by new technology. WIMP was not designed to deal with tangible interfaces, voice input or gestures.

This has been noted by HCI researchers in at least several papers, most likely more, under different names. One fundamental limitation of our current applications is that they are designed for exactly one pointer. Multitouch screens on recent smartphones and PCs are slowly paving the way for multiple pointers, allowing a user to touch several points at once. Tangibles, researched since the mid-90s, will move beyond touch screens, allowing virtual objects to be controlled by multiple physical objects. Which is faster: dragging objects on the screen with a mouse, or moving multiple physical objects on a flat surface?

Technologies for tangible input have been around for some time. The same can be said of voice input. Gestures are making their way in, receiving a huge boost from Microsoft Kinect. Combining these technologies with the traditional keyboard (still king for typing) should result in better interaction overall. I'm not ready to kill the mouse either; it is still pretty good for pointing. Sometimes a touch screen can replace it, but I don't think there are many instruments as effective for certain tasks (especially certain games).

So, what's the point? I don't think our current conventions can adapt to these new technologies. Trying to fit the new possibilities within known conventions is more likely to hinder improvements in overall user experience. Learning new things cannot be avoided forever. Of course, in creating new interaction models it is our responsibility to make them easy to learn; conventions from the physical world should be useful here.

Changing the desktop world is probably too late though. That is why I've set my sights to ubicomp, where conventions don't really exist yet. It should also be easier for people to accept learning new ways of doing new things, than it is replacing their comfortable old ways of doing things with new ways. Tear down the wall, let creativity reign!