Social media is for the expert only and tags are part of the why

Posted on: September 05, 2011 by: Vesa Metsätähti

Tags. Metadata, keywords used to describe information. Usually informal and non-hierarchical. I guess the lack of structure and occasional inline use is what distinguishes them from other types of keywording.

Some of their functions (providing context, making things easier to find, some classification, etc.) would be covered by good headlines, for both man and machine.

Much of the social media content that tags are used with is almost the size of a headline itself. Putting a headline on a tweet would not be a good idea.

Tags would need structure to make (social) media more casual

For disposable media the informality of tags might not be such a big problem. You tag a tweet to connect it to something now, not to build a library or an archive.

Even with stuff that does not matter the day after tomorrow it would be good to know your tags a bit better. For someone to understand what the currently used tags are about, there needs to be a description of some sort. The meaning of plain-language tags can be veiled too (symbols do not always carry their literal value).

The beginner, intermediate, expert model that About Face describes for users suggests that the beginner and expert segments have a high turnover rate. No one plans to be a perpetual beginner. You either quit doing something or get the hang of it pretty fast.

An expert also drops back to intermediate pretty fast when taking breaks from expert usage.

So the tags… if they are described only by themselves and by sometimes very cryptic content, you need to be an expert on both the medium and the subject to understand what they are. In the rapidly changing domain of social media it takes daily effort to know what is going on or where to find something.

And since much of the processing of social media is tag related (or at least tries to be), new users will have a hard time jumping on the social media bandwagon.

Example of Big Brother’s big move-in night

I decided to see how Twitter is used during the mandatory Big Brother season of 2011 in Finland.

Now Big Brother appears under the very same name all over the world, so searching Twitter for BB would not do any good.

The official site did not announce any tag, and none was presented in the opening ceremony. For some reason the people I know in other social media did not talk about it much, so I found no tags there either.

After a few searches I found the official handle for the BB season on Twitter, and their description also included the tag (#bbsuomi).

Two tweets from the Big Brother opening night, from the only person tweeting about it on the official tag.

There was one lonely person who valiantly commented on what was going on during the opening night’s TV broadcast. “Is Twitter this small in Finland, or just in this demographic?”, I wondered. Later it turned out that I should have magically known that the correct tag to follow was quite different (#bbstudio).

I have to admit that I’m not an expert on Big Brother: word of mouth on the subject does not reach me. Twitter is not my piece of cake either, so I’m a bit unaware of what the standards for choosing a tag are this week.

How would structure help someone to get started

Because social media should be social, that info would be good to introduce in the context, if there is one. A revolution in a country does not have a context in the same way a TV show does.

But let’s say you want to know if people are talking about one of the Big Brother contestants right now. (Bored at school, yes?) How do you get on with it via tags?

Even if tags would not have a hierarchy, there could be a “wikipedia of tags” explaining that Big Brother season 7 is running in Finland and that the show is discussed via the tags #bbsuomi (the official feed, which seems to be used to tag random stuff found in the media) and #bbstudio (the feed where fans discuss the latest events).

Tagging could go on as usual, with people just creating tags, but in case of a bigger event someone could connect them to a broader subject and explain the differences.

Having the info refer to some neutral third-party content would also prevent building an internet inside the internet. Instead of letting tags cover only events and things that have a presence in a service (and the really important ones, like revolutions, will not), you would have something to refer to, for example Wikipedia. Or why not include that info in Wikipedia and have it presented via an API in various services?
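To sketch what “presented via an API” could look like in practice, here is a minimal, hypothetical example: it maps a tag to a Wikipedia article title and pulls the article summary through Wikipedia’s public REST API. The tag-to-article mapping is a made-up placeholder (some shared registry would have to maintain it), and the article title is only assumed to exist.

```python
# A minimal sketch: describe a tag by pulling a summary from Wikipedia's REST API.
# TAG_TO_ARTICLE is a hypothetical placeholder for the "wikipedia of tags" idea above.
import requests

TAG_TO_ARTICLE = {
    "#bbsuomi": "Big_Brother_Suomi",  # assumed article title, for illustration only
}

def describe_tag(tag: str) -> str:
    article = TAG_TO_ARTICLE.get(tag)
    if article is None:
        return f"No description registered for {tag}"
    # Wikipedia's summary endpoint returns a short plain-text extract of the article.
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{article}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return f"{tag}: {response.json().get('extract', 'No summary available')}"

if __name__ == "__main__":
    print(describe_tag("#bbsuomi"))
    print(describe_tag("#bbstudio"))  # not registered, so only the fallback message
```

A service showing a Twitter search could call something like this to show, next to the raw feed, what the tag is actually about.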

A minute is a long time to sit at traffic lights if you drive only 20 seconds to the next lights

Posted on: August 19, 2011 by: Vesa Metsätähti

Motorists in Line at the Safety Lane at an Auto Emission Inspection Station in Downtown Cincinnati, Ohio...09/1975

You know how people are always scolded for how much they dislike waiting at traffic lights? Well, it is not they, but we. You usually notice this while driving a car, but you can experience the anxiety also when moving on foot.

“Irrational, it only takes a minute from your day”, they say (they who are not currently behind the steering wheel). Well, perhaps more than a minute, but still: after speeding through a couple of lights you notice how you linger on tasks and waste time without worries.

People are not bad, not that irrational — it is just that rationality is very limited by context

Comparing the extra 5 minutes of delay that the traffic lights caused to the extra 10 minutes you spent on Facebook makes the anger at a red light look dumb indeed.

But the delay should be compared to the context: you might spend 1/3 of your travel time actually driving and 2/3 on delays caused by red lights.
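To put rough numbers on that (borrowing the 20-second drive and the one-minute light from the title, which are only illustrative):

$$ \frac{60\ \text{s at the light}}{20\ \text{s driving} + 60\ \text{s at the light}} = 75\,\%\ \text{of that leg,} \qquad \frac{60\ \text{s}}{24 \cdot 3600\ \text{s}} \approx 0.07\,\%\ \text{of the day.} $$

It is the same minute in both cases; only the frame of comparison changes.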

Lesson: do not judge people irrational or dumb when doing design or customer service

If you think someone’s reaction in a situation stems from the person being irrational or just plain evil you should think again. Yes, you can encounter someone incredibly dumb, but changes are that you did not understand the context and emotions it triggered.

And if you do not understand the context and motives your service or design will be impaired.

***

Extra: UK town shuts down traffic lights with surprising results

In this video you can see how shutting down traffic lights eased traffic in a UK town. I don’t think it would work this well everywhere, but I’m surprised how little wasted time and the CO2 emissions caused by standing at lights are exploited as arguments in today’s economy and climate talk.

Goal grid, and how do you measure success with private parking control?

Posted on: February 19, 2011 by: Vesa Metsätähti

The parking garage at work is changing its rules during early 2011. The core problem seems to be that some workers use the parking garage for long-term parking. Workers, yes, since access to the garage is by access key only, and every worker has one.

My guess is that the goal is to offer more available parking spaces.

The solution? Require a parking permit to be visible on the vehicle (every worker can apply for one) and allow a maximum of one week of continuous parking (when on a business trip etc.). The suggested use for the garage is short-term parking only.

A private parking control company is authorized to enforce the rules.

More effective parking as a design assignment:

If solving the problem of “I’ll store my car here till summer” were a design brief, I would first try to make sense of it via design strategy and a hypothesis of the goals.

So, focus, definition, value, scope…

The parking garage offers parking space during the workday and shelters company vehicles. Get rid of long-term parking to offer more everyday parking opportunities for workers. Do it without building anything new or altering the layout of the parking garage. Making parking better should take place in Q1 of 2011.

Goals…

Company: get rid of the long-term parking? Get rid of the complaints? Make maintenance easier (cleaning might be a pain if there are cars you can’t get rid of).
Employee: park the car safely without hassle or fear of damage and hidden costs, and get to work faster.

To measure success…

How many minutes employees spend looking for a free parking spot daily, between 7–8, 8–9 and 9–10.
How many hours the twice-a-year maintenance takes.

Goal grid and initial impression of the suggested solution:

To do this, let’s imagine a goal for a private parking company that gets its income from the parking tickets it writes:

collect as much money as possible by ticketing cars in assigned areas.

The goal grid is a way of doing cost/benefit evaluation comparing solutions to goals.


Goals (the columns of the grid):
  • Company: make maintenance easier, offer more space
  • Employee: carefree parking
  • Parking control: make money

Solution: ticket (an increasing number of) cars
  • Company: cost if behavior does not change
  • Employee: cost if you need to be afraid of tickets → adds stress to parking
  • Parking control: benefit if behavior does not change



So at least the company’s and the parking control’s goals contradict. If people give up their bad ways and there are fewer troublesome cars, the parking control loses money. And if the aim is to make parking easier for the employee, there is an additional stress factor added even if you get rid of those five car carcasses.

There is no word yet on the rules for what the parking control can ticket. From the employee’s point of view, more troublesome than the (unknown number of) car carcasses are the cars hogging two parking spaces.

Ticketing parking space hoggers would not necessarily lead to more free space either. The ticket does not move the car.

Let’s add another solution: someone calls the owners of badly parked cars.


Goals (the columns of the grid):
  • Company: make maintenance easier, offer more space
  • Employee: carefree parking
  • Parking control: make money

Solution: ticket (an increasing number of) cars
  • Company: cost if behavior does not change
  • Employee: cost if behavior does not change AND you need to be afraid of tickets → adds stress to parking
  • Parking control: benefit if behavior does not change

Solution: call the owners (and tell them to move their cars)
  • Company: benefit if you can get hold of them and they move their cars
  • Employee: cost if badly parked cars are not re-parked instantly
  • Parking control: cost if behavior changes



The cost/benefit ratio did not change, but at least there are no contradicting conditions. And that is what the goal grid is about. You do not get too far with the raw numbers of costs and benefits, but the conditions are valuable. Conditions can be formed into requirements that explain the goal.
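If it helps to think of the grid as data, here is a small sketch of my own (not an established tool, just an illustration) that writes the cells above down as effect-plus-condition pairs and then looks for the kind of contradiction noted earlier, where the same condition is a benefit for one goal and a cost for another:

```python
# The goal grid above as data: each cell records whether a solution is a cost or a
# benefit for a stakeholder's goal, and the condition that effect depends on.
from dataclasses import dataclass

@dataclass
class Cell:
    effect: str      # "cost" or "benefit"
    condition: str   # the condition the effect depends on

goal_grid = {
    ("ticket cars", "company: easier maintenance, more space"): Cell("cost", "behavior does not change"),
    ("ticket cars", "employee: carefree parking"): Cell("cost", "you need to be afraid of tickets"),
    ("ticket cars", "parking control: make money"): Cell("benefit", "behavior does not change"),
    ("call the owners", "company: easier maintenance, more space"): Cell("benefit", "owners are reached and move their cars"),
    ("call the owners", "employee: carefree parking"): Cell("cost", "badly parked cars are not re-parked instantly"),
    ("call the owners", "parking control: make money"): Cell("cost", "behavior changes"),
}

def contradictions(grid):
    """Find solutions where the same condition is a benefit for one goal and a cost for another."""
    found = []
    for (sol_a, goal_a), cell_a in grid.items():
        for (sol_b, goal_b), cell_b in grid.items():
            if (sol_a == sol_b and goal_a < goal_b
                    and cell_a.condition == cell_b.condition
                    and cell_a.effect != cell_b.effect):
                found.append((sol_a, goal_a, goal_b, cell_a.condition))
    return found

for solution, goal_a, goal_b, condition in contradictions(goal_grid):
    print(f"{solution}: '{goal_a}' and '{goal_b}' pull in opposite directions when {condition}")
```

Running it flags the ticketing solution: the company’s and the parking control’s goals hinge on the same condition with opposite effects, which is exactly the contradiction described above.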

Side note on private parking control

First you should find out why people are parking badly. Is it because they just do not care? Then be my guest and get someone to ticket them away.

But if people put their cars where there are no real parking spaces, the reason might also be that there is not enough parking space, and ticketing does not solve that.

Private parking companies usually claim that they keep the emergency routes clear and tidy up the parking area. No they do not. A ticket does not clear an emergency route; the car needs to be towed away. They do act as a deterrent, but that affects everyone who parks their car, as their procedures are unclear and they are keen on interpreting parking so that they can ticket as much as possible.

Private parking control should work as an enabler instead of a punisher. And I guess that is called valet parking.

Content lives on multiple layers

Posted on: September 20, 2010 by: Vesa Metsätähti

Diagram summarizing the elements of user experience.
In Content Strategy Kristina Halvorson chastises Jesse James Garrett for the diagram found in The Elements of User Experience. Content requirements appear on the 2nd phase of the diagram (though they should not be called phases, and the diagram should not be interpreted as a process, according to J.J.G.). After that, content vanishes from the experience elements.

There is much resemblance between Garrett’s diagram and the four-layer model I nicked. Luckily the four layers are more abstract and do not define what media is crafted on each layer. And that is the whole idea.

The four layers should be applicable to various disciplines and various media, not just web design. In my opinion layered thinking works even better for services that do not run in a browser.

Content lives on multiple layers, just as the user interface does

The findings and vision formed on the service layer affect the content just as much as they affect the service structure, functions, etc.

To needlessly theorize, a service could be something like: content + interaction = service.

Or even better: content = motivated choices for offering something + someone consuming something.

Well, what I am trying to say is that you can’t separate content from the service, and that content is not a commodity (as Halvorson says); rather, content exists only when someone consumes it or acts on it. (Much in the same way as Peirce’s sign process, which exists only when someone is interpreting.)

One professional can be responsible for many aspects of the content

Layers and expertise: copywriting is usually presentation or task; infrastructure stuff can’t solve business problems.
Someone writing instructions or aggregating archived TV broadcasts works both on the presentation layer and on the task layer. Hopefully someone else (preferably a pro too) has defined what kind of interaction and presentation the service needs (wearing too many hats is tough regardless of the skills one possesses). The goals and criteria for product acceptance should be defined as well.

And based on that info the professional should do the editing to produce something that is understandable and recognizable as well as useful and formatted suitably.

Two bad examples (they are always easier to find)

A local home electronics store has its store locations on its page. Whoever edited the photo of the store and the map there did not do too good a job of supporting the user’s goals. (He might have just filled in the blanks of a wireframe, and that is what makes wireframes a not-so-good tool in my opinion.)

Here is what the photo and the map looked like:

The map omits all the landmarks (if the mighty IKEA store were on the map you would immediately know why you had not seen the store before). If the roads indicated where they come from and where they go to, it would be easier to identify them.

Picture of the store’s front door and a map with few points of interest.
Thinking of the goal of finding your way to the store, the photo does not help much. It could be from any of those stores (and I bet it is actually from the wrong one). The map is not much better. The road running on the north side of the location is not labelled. The road on the south is labelled, but with a different name than is commonly used (and used in the text on the same page!). The road running by the location has two names – it apparently changes name somewhere in the middle. The two place names (Lommila and Karvasmäki) are virtually unknown to the mundane. So off to Google Maps, trying to find the place and direct someone driving out there via phone.

The editor choosing the pictures did not only fill the demands of the presentation layer by posting those pictures, but was also involved in the task of finding relevant information on that page. The textual explanation somewhere above the pictures was actually much better than the visual one, but the not-so-good text styles could not compete with the images and went unnoticed.

So here we have two or three professionals working on the presentation layer and the task layer to support the service layer’s demands: the editor choosing the pictures, the editor writing the instructions (taking care of a coherent presentation: the style and tone of voice of the text, and the task of explaining how to get there), and the graphic designer making the page layout.

Text as user interface

In a tram/cable car I noticed a message on the info display. There was a note telling about changes in local healthcare services. The info was grouped nicely into speech bubbles, but it was not understandable. Not to bother you with Finnish oddities, the structure was the following:

Healthcare function A was moving to organizational unit B. People were notified that during weekdays they should visit organizational type C, and during the transfer they should go to city district D.

Now, fitting all that into those small bubbles needed some thinking, but the result was full of healthcare organizational jargon. To support the task better (and perhaps violating some presentation rules) the same thing could have been written differently. At the same time the bulletin could have explained what the different functions actually do:

If in urgent need of medical attention, go to organizational unit B instead of A. In non-urgent situations use your organizational type C. (During the night of the transfer, urgent attention is available at organizational unit D.)

Users do not create content, individuals do

Posted on: September 17, 2010 by: Vesa Metsätähti

Content Strategy for the Web: Kristina Halvorson’s Content Strategy is good reading (http://www.contentstrategy.com).
For a moment I thought that the strongest “user-created content” hype was over, but the wish surfaces again and again. Actually the goal is pretty valid. When building something for the user, having them participate is a good idea.

As Kristina Halvorson points out in Content Strategy

This is a fairly complicated, surprisingly resource-intensive approach to sourcing content. If you build a user-generated content forum, it doesn’t necessarily mean that they will come. And if they come, it doesn’t mean they’ll stick around. Engagement tactics are key, as are resources that will moderate and respond to content and comments.

Do you really want “the users” to influence your service, or do you want individuals with a desired approach and goal to get involved?

“The users” is a really broad audience. Think of what kind of discussion is going on in your local news source’s anonymous commenting system. Do you want to transfer that to your service? I did not think so.

So even if you are running a service that delivers to users through a browser (I avoid saying website), you probably have some kind of user categorization. Personas perhaps? Which of those do you want to influence your service, and in what way?

If you want to have your content or functions grouped by users, you should not be looking for the generic elastic user but for some goals or motives. Otherwise you will have that worst-case scenario at hand, and it will take more effort to fix it than the original work would have taken.

Four layers for design work means motivated design choices

The four layers are about being able to recognize what is going on, and where you need to do what to get where you want.

With user participation you need to decide what you want your specified types of users to do, so that it helps your service support the goals you are targeting. In the same manner as you create processes and systems on the infrastructure layer for internal actors, you need to create a process for user participation.

The user participation process should have the hooks and reward system that guide users to work in a way that takes the service in the direction you want. Not to make it a dumpster for bad comments, for example.

User participation will also need tasks and presentation that help the users understand what this is all about. And on the service layer, as you know your users’ goals and the context of the service, you can predict what roles you can realistically give them.

Seductive Interactions: Stephen P. Anderson has good tools in his “Seductive Interactions” presentation (http://www.slideshare.net/stephenpa/seductive-interactions-idea-09-version).
Stephen P. Anderson has devised a good presentation, The Art and Science of Seductive Interactions, with strategy and means for supporting users’ behavior.

If it works as you expected you have failed

Good design is something that users did not expect. If it just fulfills their expectations, it does nothing revolutionary.

You can’t have a watertight plan for the outcome of user participation. And you should not. But you should still have a strategy and goals for your service.

Users will behave unexpectedly, and that is why you wanted the participation in the first place. When you see what the participation and interaction result in, you should decide whether you have business in that behavior and should emphasize it, or whether your business is somewhere else and you should try to change the behavior.

Pages for iPad does not get much use (yet): most of the time there is a need to create information, not documents

Posted on: September 12, 2010 by: Vesa Metsätähti

Children love to use Pages for “writing practice”; with the virtual keyboard, mode changing while typing is much less probable.
While testing the iPad I have tried various applications that I could imagine being useful in everyday life.

It works as a nice photo album, but that is not needed very often (regardless of which application is used). Doodling is not as easy as it sounds, but I end up drawing quite a lot.

One thing that surprised me was how little I have used Pages. Actually, besides some random test typing, I have not used it at all. I guess that is because with the iPad I usually need to create information, not documents.

If the information ends up in a document at some point, the actual document is created or modified on the laptop.

People will use the most powerful interface available, or take the path of least resistance

Back in the day I remember Christian Lindholm stating that people will use the most powerful interface available. The point was that if the TV remote is more versatile for controlling the TV than the mobile phone is, the TV remote will be used, no matter how cool the phone is.

“people will work like electricity and take the path of least resistance”

This might be more hair splitting, but since then I have modified that thought to: people will work like electricity and take the path of least resistance when tackling a task. (They will, however, act quite irrationally when pursuing a goal.)

What I mean by this is that people will not strive for power but for suitability. (Yes, that has to do with the definition of power, and that might be what Christian originally meant.) Even though controlling the TV with the mobile’s interface would be very powerful (you get to control a bunch of things very accurately), my wife would still choose the crappy old remote over it. The complexity and options that come with the precision and power cause too much cognitive fatigue. A straightforward interaction style works well with TV watching.

There is a threshold of cognitive fatigue you are willing to sustain

Leisure users are more likely to put up with bad user interfaces when there is a strong urge to complete a task (a death threat, for example). At some point the resistance of a product and its user interface grows enough that some tasks are skipped until something more adequate is available. A whole lot of time can be spent searching for the DVR’s remote before my mother-in-law gives up and settles for the universal remote.

For me, Pages works that way. If I’m in a meeting, for example, and there is a need to communicate with someone, plain text and mail will do. If a document is needed, I’ll usually wait for the meeting to end. Sending a document instead of information would also require much more thought on wording and structuring than just delivering a well-set thought.

I do hate it when someone sends a Word document with text that could have been sent just as well as plain or rich text in the mail body. Perhaps I should reply with a similar PDF?

Surprisingly, I have sent a Keynote from the iPad structuring an outline of a presentation for someone else to complete or build on. A similar structure would work just as well in plain text. I think the slide format helps to think about the final medium, the document, instead of adding resistance.

What type of resistance are you creating?

So when creating a product or service, find out how much resistance you are creating for the user. Are the things being designed a benefit or a cost for the user?

iPad is about new possibilities for controls and connectivity

Posted on: September 04, 2010 by: Vesa Metsätähti

I have been toying around with an iPad for a week now to find out what it actually is and whether it is good for anything. Well? Here are some of my findings.

GoodReader’s Dropbox sync on iPad: GoodReader is a jewel, fetching stuff from Dropbox, mail accounts, and other web storage.
The first thing I noticed was that the keyboard and typing are much better than I expected. Everyone is able to tell how much typing on the iPad sucks (and most have not tried it), but I think the keyboard defies this expectation. (To prove this I’m writing this on the iPad – TaskPaper is exceptional!)

If you have your stuff lying around the web, you will get up and running fast

For a while now I have had much of my stuff “in the cloud”, as they say today, I think. Thus, when getting my hands on the iPad, I was able to work on and consume stuff just as with my laptop. Books, magazines, etc. on Dropbox; documents, reference info, bookmarks in TaskPaper, and so on.

Tester B, however, was in the habit of storing everything on a hard drive, so there was not much for her to do with an iPad (except for browsing the internet, which is what she does anyway – but she still felt kind of uneasy and unfamiliar with the pad because of that).

I wonder if you could handle any larger touch screen

Hands on the iPad: resting a hand or touching with the wrist produces unwanted results: Angry Birds zooms instead of hurling birds into their doom.
Comparing how kids and adults use the iPad, I noticed one difference. Kids fail with the touch interface regularly because they rest their other hand on the screen while touching with one.

So perhaps if the screen were larger, adults would be tempted to rest their hands on the screen as well? (Think about reading a book and not being able to hold your fingers on the paper, or drawing on paper and not being able to touch it with anything other than the tip of the pen.)

While the direct-control interaction style is pleasing, the touch interface is not ergonomic.

Bird and death machine drawn with the iPad: drawing this on the iPad was easier than I would have expected, but it strained my fingers, wrist and shoulder. Massive headache for the following day.
Kids love using the iPad even over the iPhone or iPod touch: you can grab the icons and the screen with your hand instead of poking with a finger. They understand really fast what happens where, much faster than when using a mouse/trackpad and keyboard. There is less abstraction in a gestural interface and direct control (except when the activities are abstract and gestural metaphors can’t be applied).

For grown-ups the direct control is great too. There is enough accuracy to click links on web pages. But even where the cognitive friction is lighter, extensive use of the touch screen is physically tiring. Drawing the example image would normally be no special strain on the hand, but doing it with the iPad caused tenosynovitis and a massive cramp in the shoulder. The same goes for typing something longer (like this), because you can’t rest your arms as easily as when typing with a keyboard that rests on a table. (That will not work with the iPad because the screen is an extension of the keyboard…)

This reminds me of the joystick mice that were a big thing some years ago. They might have reduced strain on the wrist, but they destroyed your shoulder because that was what you were moving the mouse with. The same goes for a finger on a touch screen compared with pen and paper: on paper the pen’s support comes from the wrist, whereas with a touch screen you need to activate all the muscles from your fingertip down to your butt (if you are sitting).

The big thing is the possibility of new ways of controlling things, not necessarily viewing them

Loopseque on iPad: Loopseque in use on the iPad to control a musical performance. (http://loopseque.com/)
The touch screen on the iPad is surprisingly comfortable and accurate, and the user interface is well designed for touch control. Even though I enjoyed (!) reading books on the iPad, I expect the future to bring new wow in the context of controls.

I do not expect any funky new ways of drawing with toes or scaling by pinching with tongue and nose. The form factor and the screen would be very good for universal remoting – dashboarding a DVR (if you still have one next year) or something more complex. Good examples can be found in the music scene, where devices and software can already be controlled with better or worse iPad interfaces.

The iPad is more intimate than a laptop or PC, and is used surprisingly differently

The iPhone has a ton of different useful and useless applications. It connects quite well with many services and communication channels. If you asked people what goals the iPhone supports, they would probably mention “keeping in touch with my friends/business” or “finding information wherever I need it”. And it does that stuff quite well.

What people actually do with the iPhone when you observe them is toy around with it (with the excuse of keeping in touch with their friends). They twiddle with the interface and launch a bunch of different programs. The iPhone gives your fingers something to do and gives an illusion of doing something useful. The lack of multitasking has so far made the use of the iPhone much simpler and more accessible than some competitors with complex (and usually unknown-to-users) multitasking models. So the iPhone fits well with the “need to do something because I’m bored or need stimulation” mentality.

The laptop has longer usage sessions (duh). Where the iPhone is controlled intensively during its sessions, the laptop is more for viewing. Observing regular, real people using a laptop at home, they spend 80% of the time reading/watching and thinking/experiencing. Active laptop use happens only very sporadically. It is left open to keep you company with the previously viewed web page. Laptop use is more relaxed, and you can step away from it whenever you want without putting it into your pocket or setting it on a table. The laptop can keep running in the background while you switch your concentration to something else, or just sit there while you watch TV.

Poses for using iPhone, iPod and laptop
The iPad is quite different. It is too large to fulfill the need for occasional stimulation or twiddling around with the interface. It is more immersive and demanding. On the other hand, the iPad can’t be set up with the array of information you want to be peeking at and left on the table to wait for your attention in that state. On the iPad the upcoming multitasking will be great, since you need to be able to switch from one context to another more than with a phone. For example, writing this in the browser and checking other browser windows for links or for something completely different is great. Doing that with applications, leaving and reopening the browser for it, would be too much cognitive friction.

So the iPad will not fill the need for background activities and information, even though from a usage and capabilities point of view it could. And that is why it feels a bit strange and won’t replace the laptop. You can’t simply step away from it; you always need to put it “away”.

Reading and intensive sessions work well with iPad

I was surprised how comfortable reading was on the iPad. I felt some wow when I was able to walk easily around the house with the device and continue facebooking in the kitchen (yeah, truly revolutionary). Taking notes in a meeting or during a face-to-face session was much more comfortable than with a laptop (a barrier) or an iPhone (requires too much focus due to the small keyboard).

So what is it? I don’t know, but I’m quite addicted to using it for some tasks. Let’s see after a month if I’m still using it for something.

Touch screens might destroy humanity and iPad has one

Posted on: August 29, 2010 by: Vesa Metsätähti

Don Norman casts a critical eye on touch devices in his essay Gestural Interfaces: A Step Backwards in Usability. On many points I have to agree with him. Through recent years many technologies have been labelled and argued to be “easy to use”, while the label should have been given to their application in a context.

Software is claimed to be user friendly if it

  • is menu driven
  • uses high contrast colors in GUI
  • is controlled with hard keys
  • is controlled with soft keys
  • etc.

From the design layers perspective, the thing is to choose relevant UI technologies that help the activities on the task layer reach the goals defined on the service layer. You can design a device with only mechanical controls that has bad interactions and does not support users’ goals. You can certainly do the same with a gestural UI.

From genius design to activity or experience oriented design

“No entry for power-driven vehicles, driving to a property excepted.” Quite an unnecessary sign for a dead-end road with just houses, but still perfectly legal.
In Norman’s essay, inadequate guidelines in their many forms receive the most attention: “If there is no good, set standard for X, you should not be doing it.” Complying with standards (http://www.4layers.com/thoughts/articles/10/layers-for-design-and-parking-at-airport#maturity) can produce bad design, and clumsy transportation planning is a good example of it: while the law might require or give room for some solutions, it does not mean that they support any driver’s goals.

For example, a short dead end with only houses on it might (and does!) have a “No entry for power-driven vehicles, driving to a property excepted” sign. If there were a problem with juveniles recklessly roaming around on their mopeds, there would be an issue for that sign to solve. If it is there just for the sake of regulation (“we put these on the side roads in this neighbourhood”) — not to support an activity or experience — it adds complexity and maintenance work.

And back to gestural interfaces… choosing a touch screen as an interface just because it is trendy is quite a bad decision.

Scalability, feedback and visibility?

One thing that has always confused me is the striving for uniform controls/feedback across all devices, small or big. No, every remote control should not have its own conventions, but there is no need, for example, for a phone and a laboratory analyzer to work similarly just because they are operated with gestures.

I see where Norman is coming from: there might be some basic activities for which it would be smart to have common gestural ground. Just as you should not have doors that look like they should be pulled but open by pushing.

A normal Windows GUI complying with the standards is quite bad when the graphics are scaled up and put on a touch screen. It looks like Windows for the disabled and makes you feel like one.

That is where feedback steps in: due to their nature, gestural interfaces need a heap of exaggerated feedback. While physical keys move and make sounds, a touch GUI does not. Thus you need to use secondary media to communicate with the user.

The presentation layer is very important when communicating what is possible and doable – what can be expected. It is possible to draw an interface that does not hint at what the product is capable of. Sometimes I think it is even good to leave out the visual representation of certain shortcuts and activities that are for power users. (Just give enough feedback so that if someone accidentally triggers one, they will realize what has happened.)

What actually is the problem with gestural interfaces?

When the interface moves to a more abstract level, the conventions and rules do not help that much. Thinking about them with the same criteria and paradigms as, for example, an ATM does not work well. As there are more combinations and what the interface actually is becomes more vague, the quality of the interface and interaction is more about the machine guessing what you mean than about how the UI is devised.

Think of a gestural interface without a screen: any of your gestures (a change of pose, a movement of the eyes, waving hands) might be a gesture meant to control a product, or it might not be. (Yeah, those “wave your hands in the air and stuff happens” interfaces have so far been a very bad experience, as the use of buttons is just replaced with arbitrary gestures that might or might not be considered interaction.)

Good old voice control goes into the same category. The product needs to know that you are addressing it (instead of someone or something else in the same room) and what you actually want, even when you as the user do not know all the functions it is capable of. Think of a voice-controlled oven and boiling an egg. How do you know what commands can and should be given? If there is a list, would it not be simpler to just have corresponding buttons? Especially if you need to get the oven’s attention by, for example, pushing a button in order to speak a command.

What a machine with a voice-controlled interface should really have is enough AI to understand what you mean and how its actions will affect the current situation. Then it should act, if the result would not be an instant disaster, and confirm with you that what it is doing is roughly what you wanted.

And this all leads to…

This post was supposed to be about what you can actually do with an iPad. Looks like I’m not getting there this time. Gestural interfaces, such as the touchscreen on the iPad, sure have potential problems. But more importantly: is that tablet any good? The truth (or an opinion) will be revealed soon.

Layers for design and parking at the airport

Posted on: June 07, 2010 by: Vesa Metsätähti

I gave my wife and two kids a ride to the airport as they were off to visit grandma. Helsinki-Vantaa airport has “recently” made some changes to the terminals, so dropping the family off at T2 was new to me.
Arrivals and departures
Traffic was directed via large signs: two lanes up the ramp were for Departures. Little did I know that the real meaning of Departures was: “If you are a taxi, you can drop your departing passenger here”. In the Departures area there was no place for ordinary people to stop their car, not to mention short-term parking.

As we were in a hurry, I had no time to drive through the Departures area and around the airport to choose another exit (as I now understood that Arrivals meant Departures for ordinary people). I chose a spot with no aggressive “do not park here” signs and helped my wife hold the boys and get the stroller and other luggage in. (The Mutsy Urban Rider does this wonderfully too!)

Naturally there was a policeman writing me a parking ticket as I returned 5 minutes later. He asked me for a reason for such offensive parking. An explanation involving hurry, unclear and misleading signs, and getting two toddlers onto a plane did not do. (The question was probably raised just to scold me.)

Too bad, you got the ticket — so how are the layers involved here?

Parking at Helsinki-Vantaa is a bit complicated at the moment (or was, the last time I did it with the new system).

Doing better at the presentation layer — with the signs — would clearly help. Adding info on where to just drop people off and where to go if you need to assist someone to the plane would help drivers make better choices.

Working on the infrastructure would help too: whoever enforces the parking rules could have guidelines allowing for a certain flexibility.

The task and service layers should receive more focus. Signs or helpful staff do not help if you have trouble finding your way in or getting there safely (crossing multiple streets; departing passengers enter through the arrival doors and have to fight their way through arriving passengers).
Design Decision Styles by Jared M Spool
Sure, the signs are probably the easiest thing to influence in all this. But as my colleague has complained often enough: while doing sign design or redesign, he needs to come up with other arrangements as a by-product. So professional Genius design on the presentation layer yields Unintentional design on the other layers.

Jared Spool has a wonderful presentation/model about different design styles and about choosing one. Intentionally.

Defining interaction style helps work on multiple layers

Posted on: November 02, 2009 by: Vesa Metsätähti

Interaction style is something I was thinking about defining back in the late 90s. In the early 2000s there was not much “leisure” type software (not counting games, as they are a bit different from what we are looking at here). Software (and hardware?) for consumer digital photography — especially the bundled kind — was quite awful.1 Although the idea of defining interaction style got going with tangible products and user interfaces, this model was brushed up in software-related projects.

The model became very important with software-embedded products and services. Some such projects that I have worked on treated the physical side of the product as a sculpture, and the design of its behavior as a totally different, unrelated project.2

The reasons for needing a way to define interaction style were that:

  1. the way something works defines it just as much as its physical appearance
  2. the way something works needs to correlate with the physical appearance and other manifestations of the product or service (advertisements, packaging, physical appearance, etc. give a promise of what the product or service will be like; if they tell a different story or lie, the customer will be extremely disappointed: “this car looks like an Audi but drives like an Opel”)
  3. in the future, with ubiquitous computing, what defines the experience is not what we use but how we use it (you might be using the handset of X but the control device of Y – and X might be quite transparent: “in this car my steering wheel drives like an Audi”)

Interaction style can be defined by the relationship of controls and feedback

The range of controls in the model goes from casual to conscious. Casual controls include anything that has not been specifically designed for the use (or where the use has not been specifically designed to be controlled by it), and/or that does not provide a highly exact, motivated or sensible way to tackle the task.

In this model the feedback ranges from plain to rich. Feedback is a bit tricky to evaluate since it does not naturally fit inside such a one-dimensional continuum. Rich feedback would be something like a melody instead of a simple sound. Richness or plainness is relative and is found by comparing the feedback to what else it could (realistically) be, or to what other conventions there currently are or have been.
Interaction style diagram, feedback from plain to rich vs. controls from casual to conscious
In our diagram feedback is located on the horizontal axis and controls on the vertical axis. Thus in the lower left corner we have an area with plain feedback and casual (arbitrary) controls. In the upper right corner we have a place for stuff with rich feedback and conscious controls. Interaction with rich feedback and conscious controls can be categorized as tool-like interaction. Interaction with rich feedback and casual controls can be categorized as toy-like interaction.

In a sense this describes the relationship between stuff on the task layer and stuff on the presentation layer. More importantly, this is a tool for work on the task layer to define what kind of feedback the presentation layer should consider. It is up to the presentation layer to decide what the feedback actually is.

Feedback changes between primary and secondary media

Interaction style diagram, secondary media strongest in the rich–casual corner, primary media strongest in the plain–conscious corner

Primary media is what you would think of as the natural output or feedback of your actions. Secondary media is something added, more artificial, etc. For tool-like interaction the secondary feedback gives a more detailed view of the matter, usually in such detail that it is redundant for a layman. With toy-like interaction the secondary media makes the natural feedback more visible and understandable, or covers for a lack of natural feedback.

As an example, take four different audio tools. For all of them the primary media is sound.

Audio tools as an example of interaction style

In the lower right corner we have a mini stereo system. There can be a lot of controls (shiny ones), but they perhaps have little or inaccurate effect on the sound the stereo system produces. That does not really matter: the user knows the stereo system sounds good through the flashing lights it has, the secondary media. Clearly a case of toy-like interaction.

In the lower left corner we have a portable radio. Not too many buttons and not too deep controls. For feedback, knowing that you are playing the correct radio station is enough (even when they tell what station you are listening to, they fail if you do not recognize it from the programming). Let’s call this straightforward interaction.

In the upper left corner we see a DJ’s record decks. There can be plenty of controls, but the largest are the most important – by spinning the decks the DJ is in full control of how the music is played. Even though there can be a handful of feedback to help the DJ in her job, the real professional trusts only her ear, listening to how the music sounds. Direct manipulation, perhaps?

In the upper right corner is an image of a studio mixer. The studio has accurate control over many things, and the technician trusts her ear in evaluating the sound. Additionally she gets a bunch of extra feedback on sound levels, etc. Tool-like interaction.
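To summarize the four corners, here is a small sketch of my own (the scores are rough guesses, not part of the model) that places a product in a quadrant from two values: how rich its feedback is and how conscious its controls are.

```python
# Place a product in the interaction style diagram from two rough scores:
# feedback 0 = plain .. 1 = rich, controls 0 = casual .. 1 = conscious.
def interaction_style(feedback: float, controls: float) -> str:
    rich = feedback >= 0.5
    conscious = controls >= 0.5
    if rich and conscious:
        return "tool-like interaction"        # upper right: studio mixer
    if rich:
        return "toy-like interaction"         # lower right: mini stereo system
    if conscious:
        return "direct manipulation"          # upper left: DJ's record decks
    return "straightforward interaction"      # lower left: portable radio

# Rough, illustrative scores for the audio tools discussed above.
examples = {
    "studio mixer": (0.9, 0.9),
    "mini stereo system": (0.8, 0.2),
    "DJ record decks": (0.2, 0.9),
    "portable radio": (0.1, 0.1),
}
for name, (feedback, controls) in examples.items():
    print(f"{name}: {interaction_style(feedback, controls)}")
```

The point, as the next section says, is not that one quadrant is better than another, only that you know which quadrant you are in and communicate it.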

There is no right combination and good usability can be found in all of them

Unlike with some diagrams, there is no preferred combination. The “correct” choice depends on the context and goal. (Working with a high-end lawn mower should not feel toy-like – unless that is for some reason specifically decided to be the thing that sets it apart from the rest. But even then the physical and visual world should have some sort of connection to the toy-like-ness, or it will feel detached and unmotivated.)

Good design and usability can be found in each category. (The straightforward portable radio is actually the most stylish of the example devices.) The important thing is to know which part of the product or service will fall into which quadrant, and to communicate it.

Sometimes the thinking of one quadrant is forced on every part of a product. But not everything has to behave similarly — it is more important to behave appropriately than “logically”. In a healthcare device the examination process can be direct manipulation or perhaps tool-like interaction (depending on the application, of course). Selecting the current patient from a database should, however, be straightforward and not forced to work like direct manipulation, for example.

Now how does this relate to the 4 layers for design?

The interaction style model is a useful tool for:

  • generating designs — trying out what the product or service would be like if executed in another quadrant, for example
  • validating designs — evaluating whether the design fits the defined interaction style
  • communication — it is easier to get the design right if different professionals have common terminology for this kind of abstract stuff

Interaction style is a useful tool for the presentation, task and service layers
Much of the documentation related to design work is directed at the infrastructure layer. Interaction style is useful as a mental tool for the designer and as a communication method between designers. It is less likely to be useful as documentation for implementation.

Footnotes, a.k.a. Excel turns into a bad toy when a “fancy GUI” is added, etc.

1 The software bundled with cameras (and most image editing and organizing software of the early digital boom, for that matter) was tool-like interaction stuff that did not really relate to photography. To make them look more humane, they were given bloated graphics, and sometimes the more “advanced” functions were hidden somewhere, resulting in sorry toy-like interaction coupled with bad usability.

2 The products or services were usually thought through models (like Peter McGrory’s 4 dimensions of product identification) that were then forcibly applied to statue-like design objects — with some vague brand value in the background and physical features that were designed in the end (and the value and behavior happening magically somewhere in between).