Posted on: September 05, 2011 by: Vesa Metsätähti
in: Just thoughts
Tags: metadata keywords used to describe information. Usually informal and non-hierarchical. I guess the lack of structure and occasional inline use is what distinguishes them from other types of keywording.
Some of their functions (providing context, making things easier to find, some classification, etc.) could be handled by good headlines, for both man and machine.
Much of the social media content tags are used with is almost the size of a headline itself. Putting a headline on a tweet would not be a good idea.
Tags would need structure to make (social) media more casual
For disposable media the informality of tags might not be such a big problem. You tag a tweet to connect it to something now, not to build a library or an archive.
Even with stuff that does not matter the day after tomorrow, it would be good to know your tags a bit better. For someone to understand what the currently used tags are about, there needs to be a description of some sort. The meaning of plain-language tags can be veiled too (as symbols do not always carry their literal value).
The beginner-intermediate-expert model described in About Face suggests that the beginner and expert segments have a high turnover rate. No one plans to be a perpetual beginner: you either quit doing something or get the hang of it pretty fast.
An expert drops back to intermediate pretty fast, too, when taking breaks from expert usage.
So the tags… if they are described only by themselves and by sometimes very cryptic content, you need to be an expert on both the medium and the subject to understand what they are. In the rapidly changing domain of social media, you need daily effort to know what is going on or where to find something.
And since much of the processing of social media is tag related (or at least tries to be), new users will have a hard time jumping on the social media bandwagon.
Example: Big Brother's moving-in night
I decided to see how Twitter is used during the mandatory Big Brother season of 2011 in Finland.
Now, Big Brother appears under the very same name all over the world, so searching Twitter for "BB" would not do any good.
The official site did not announce any tag, and none was presented in the opening ceremony. For some reason the people I know in other social media did not talk about it much, so I found no tags there.
After a few searches I found the official handle for the BB season on Twitter, and its description also contained the tag (#bbsuomi).
There was one lonely person who valiantly commented on what was going on during the opening night's TV broadcast. "Is Twitter this small in Finland, or just in this demographic?", I wondered. Later it turned out that I should have magically known that the correct tag to follow was quite different (#bbstudio).
I have to admit that I'm not an expert on Big Brother: word of mouth on the subject does not reach me. Twitter is not my piece of cake either, so I'm a bit unaware of the standards for choosing a tag this week.
How would structure help someone to get started
Because social media should be social, that info would be good to introduce in context, if there is one. A revolution in a country does not have a context in the same way a TV show does.
But let's say you want to know if people are talking about one of the Big Brother contestants right now (bored at school, yes?). How do you get on with that via tags?
Even if tags had no hierarchy, there could be a "wikipedia of tags" explaining that Big Brother season 7 is running in Finland and that the show is discussed via the tags #bbsuomi (the official feed, which seems to be used for tagging random stuff found in the media) and #bbstudio (the feed where fans discuss the latest events).
Tagging could go on as usual, with people just creating tags, but in the case of a bigger event someone could connect them to a broader subject and explain the differences.
Having the info refer to some neutral third-party content would also prevent building an internet inside the internet. Instead of tags pointing only to events and things that have a presence in the service (and the really important ones, like revolutions, will not), you would have something external to refer to, for example Wikipedia. Or why not include that info in Wikipedia and have it presented via an API in various services?
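As a sketch of what that could look like: the snippet below maps a tag to a Wikipedia article and fetches its plain-language summary through Wikipedia's public REST API. The tag-to-article mapping (and the article titles in it) is hypothetical — in reality someone would have to maintain that link, which is exactly the "wikipedia of tags" work described above.

```python
# Sketch: resolve a tag to a human-readable description via Wikipedia's
# public REST summary endpoint. TAG_TOPICS is a hypothetical mapping;
# the article titles are illustrative, not verified.
import json
import urllib.request

TAG_TOPICS = {
    "#bbsuomi": "Big_Brother_(Finnish_TV_series)",
    "#bbstudio": "Big_Brother_(Finnish_TV_series)",
}

API = "https://en.wikipedia.org/api/rest_v1/page/summary/"

def summary_url(tag):
    """Return the summary-API URL for a tag, or None if it is unmapped."""
    title = TAG_TOPICS.get(tag.lower())
    return API + title if title else None

def describe_tag(tag):
    """Fetch a short plain-language description for a known tag."""
    url = summary_url(tag)
    if url is None:
        return None
    with urllib.request.urlopen(url) as response:
        return json.load(response).get("extract")
```

A service would call `describe_tag("#bbsuomi")` and show the extract next to the feed, so a newcomer would not need to be an expert on the medium and the subject just to decode the tag.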
Posted on: August 19, 2011 by: Vesa Metsätähti
in: Just thoughts
You know how people are always scolded for disliking waiting at traffic lights? Well, it is not they, but we. And you usually notice this while driving a car, but you can experience the anxiety also when moving on foot.
"Irrational, only a minute out of your day", they say (they who are not currently behind the steering wheel). Well, perhaps not only a minute, but still: after speeding through a couple of lights you notice how you linger on tasks, wasting time without worries.
People are not bad, not that irrational — it is just that rationality is very limited by context
Comparing the extra 5-minute delay the traffic lights caused to the extra 10 minutes you spent on Facebook makes the anger at a red light look dumb indeed.
But the delay should be compared to its context: you might spend 1/3 of your travel time actually driving and 2/3 on delays caused by red lights.
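The same arithmetic, sketched out with assumed numbers (the 1/3 vs 2/3 split above is the point; the exact figures here are made up):

```python
# The same delay looks tiny against a whole day but huge against
# the time you actually spend travelling. Numbers are illustrative.
day_minutes = 16 * 60    # a waking day
driving_minutes = 5      # time actually spent moving the car
delay_minutes = 10       # time spent standing at red lights

print(f"{delay_minutes / day_minutes:.1%} of the waking day")            # ~1.0%
print(f"{delay_minutes / (driving_minutes + delay_minutes):.0%} of the travel time")  # 67%
```

One percent of your day sounds irrational to fret over; two thirds of your trip does not.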
Lesson: do not judge people irrational or dumb when doing design or customer service
If you think someone's reaction in a situation stems from the person being irrational or just plain evil, you should think again. Yes, you can encounter someone incredibly dumb, but chances are that you did not understand the context and the emotions it triggered.
And if you do not understand the context and motives, your service or design will be impaired.
Extra: UK town shuts down traffic lights with surprising results
In this video you can see how shutting down traffic lights eased traffic in a UK town. I don't think it would work this well everywhere, but I'm surprised how little, amid all the talk about the economy and the climate, the wasted time and CO2 emissions caused by standing at lights are exploited as arguments.
Posted on: February 19, 2011 by: Vesa Metsätähti
in: Just thoughts
The parking garage at work is changing its rules during early 2011. The core problem seems to be that some workers use the garage for long-term parking. Workers, yes, since access to the garage is possible only with an access key, which every worker has.
My guess is that the goal is to offer more available parking spaces.
Solution? Require a parking permit visible on the vehicle (every worker can apply for one) and allow a maximum of one week of continuous parking (for business trips etc.). The suggested use of the garage is short-term parking only.
A private parking control company is authorized to enforce the rules.
More effective parking as a design assignment:
If solving the problem of "I'll store my car here till summer" were a design brief, I would first try to make sense of the brief via design strategy, forming a hypothesis of the goals.
So, focus, definition, value, scope…
The parking garage offers parking space during the workday and shelters company vehicles. Get rid of long-term parking to offer more everyday parking opportunities for workers. Do it without building anything new or altering the layout of the garage. Making parking better should take place in Q1 of 2011.
Company: get rid of the long-term parking? Get rid of the complaints? Make maintenance easier (cleaning might be a pain if there are cars you can't get rid of).
Employee: park the car safely, without hassle or fear of damage and hidden costs, to get to work faster.
To measure success…
How many minutes employees spend looking for a free parking spot daily, between 7-8, 8-9 and 9-10.
How many hours the twice-a-year maintenance takes.
Goal grid and initial impression of suggested solution:
To do this, let's imagine a goal for the private parking company, which gets its income from the parking tickets it writes:
collect as much money as possible by ticketing cars in the assigned areas.
The goal grid is a way of doing cost/benefit evaluation by comparing solutions to goals.
Goals:
- make maintenance easier, offer more space
- carefree parking
- collect money by ticketing (the parking company's goal above)

Solution: ticket (an increasing number of) cars
- vs. maintenance and space: cost, if behavior does not change
- vs. carefree parking: cost, if you need to be afraid of tickets → adds stress to parking
- vs. ticketing income: benefit, if behavior does not change
So, at least the company's and the parking control's goals contradict: if people give up their bad ways and there are fewer troublesome cars, parking control loses money. And if the aim is to make parking easier for the employee, an additional stress factor is introduced even if you do get rid of those five car carcasses.
There is no word yet on the rules for what parking control can ticket. From the employee's point of view, more troublesome than the (unknown number of) car carcasses are the cars hogging two parking spaces.
Ticketing parking-space hoggers would not necessarily lead to more free space either: the ticket does not move the car.
Let's add another solution: someone calls the owners of badly parked cars.
Goals:
- make maintenance easier, offer more space
- carefree parking
- collect money by ticketing

Solution: ticket (an increasing number of) cars
- vs. maintenance and space: cost, if behavior does not change
- vs. carefree parking: cost, if behavior does not change AND you need to be afraid of tickets → adds stress to parking
- vs. ticketing income: benefit, if behavior does not change

Solution: call the owners (and tell them to move their cars)
- vs. maintenance and space: benefit, if you can get hold of them and they move their cars
- vs. carefree parking: cost, if badly parked cars are not re-parked instantly
- vs. ticketing income: cost, if behavior changes
The cost/benefit ratio did not change, but at least there are no contradicting conditions. And that is what the goal grid is about: you do not get far with mere counts of costs and benefits, but the conditions are valuable. Conditions can be formed into requirements that explain the goal.
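To make the grid mechanics concrete, here is a minimal sketch of the goal grid as a data structure, using the goals and solutions from the parking example. The goal names and condition strings are my own shorthand for the cells above; the "contradiction" check simply looks for cells where the same condition is a benefit for one goal and a cost for another.

```python
# Sketch: the goal grid as condition-tagged cost/benefit cells.
# A contradiction is the same solution + condition producing a benefit
# for one goal and a cost for another.
from collections import namedtuple

Cell = namedtuple("Cell", ["goal", "solution", "kind", "condition"])

grid = [
    Cell("maintenance and space", "ticket cars", "cost",    "behavior does not change"),
    Cell("carefree parking",      "ticket cars", "cost",    "afraid of tickets"),
    Cell("ticketing income",      "ticket cars", "benefit", "behavior does not change"),
    Cell("maintenance and space", "call owners", "benefit", "owners reachable, cars moved"),
    Cell("carefree parking",      "call owners", "cost",    "cars not re-parked instantly"),
    Cell("ticketing income",      "call owners", "cost",    "behavior changes"),
]

def contradictions(grid):
    """Pairs of cells where one goal's benefit is another goal's cost
    under the same solution and condition."""
    return [
        (a, b)
        for a in grid for b in grid
        if a.solution == b.solution
        and a.condition == b.condition
        and a.kind == "benefit" and b.kind == "cost"
    ]

for benefit, cost in contradictions(grid):
    print(f"'{benefit.goal}' benefits while '{cost.goal}' suffers "
          f"when {benefit.condition} ({benefit.solution})")
```

Running this flags exactly the conflict found above: ticketing income benefits, and the maintenance/space goal suffers, under the very same "behavior does not change" condition, while the call-the-owners solution produces no such pair.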
Side note on private parking control
First you should find out why people are parking badly. Is it because they just do not care? Then be my guest and get someone to ticket them away.
But if people put their cars where there are no real parking spaces etc., the reason might also be that there is not enough parking space, and ticketing does not solve that.
The private parking companies usually claim that they keep the emergency routes clear and tidy up the parking area. No, they do not. A ticket does not clear an emergency route; the car needs to be towed away. They do act as a deterrent, but that affects everyone parking their cars, since the companies' procedures are unclear and they are keen on interpreting parking rules so that they can ticket as much as possible.
Private parking control should work as an enabler instead of a punisher. And I guess that is called valet parking.
Posted on: September 12, 2010 by: Vesa Metsätähti
in: Just thoughts
Children love to use Pages for "writing practice"; with the virtual keyboard's mode changes, typing is much less predictable.
While testing the iPad I have tried various applications that I imagined could be useful in everyday life.
It works as a nice photo album, but that is not needed very often (regardless of which application is used). Doodling is not as easy as it sounds, but I end up drawing quite a lot.
One thing that surprised me was how little I have used Pages. Actually, besides some random test typing, I have not used it at all. I guess that is because on the iPad I usually need to create information, not documents.
If the information ends up in a document at some point, the actual document is created or modified on a laptop.
People will use the most powerful interface available, or take the path of least resistance
Back in the day, I remember Christian Lindholm stating that people will use the most powerful interface available. The point was that if the TV remote is more versatile for controlling the TV than a mobile phone is, the TV remote will be used, no matter how cool the phone might be.
“people will work like electricity and take path of least resistance”
This might be more hair splitting, but since then I have modified that thought to: people will work like electricity and take the path of least resistance when tackling a task. (They will, however, act quite irrationally when pursuing a goal.)
What I mean by this is that people will not strive for power, but for suitability. (Yes, that has to do with the definition of power, and that might be what Christian originally meant.) Even though controlling the TV with the mobile's interface would be very powerful (you get to control a bunch of things very accurately), my wife would still choose the crappy old remote over it. The complexity and options that come with the precision and power cause too much cognitive fatigue. A straightforward interaction style works well with TV watching.
There is a threshold of cognitive fatigue you are willing to sustain
Leisure users are more likely to put up with bad user interfaces when there is a strong urge to complete a task (a death threat, for example).
At some point the resistance of a product and its user interface grows enough that some tasks are skipped until something more adequate is available. A whole lot of time can be spent searching for the DVR's remote until my mother-in-law gives up and settles for the universal remote.
For me, Pages works that way. If I'm in a meeting, for example, and there is a need to communicate with someone, plain text and mail will do. If a document is needed, I'll usually wait for the meeting to end. Sending a document instead of information would also require much more thought on wording and structure than merely delivering a well-set thought.
I do hate it when someone sends a Word document with text that could just as well have been sent as plain or rich text in the mail body. Perhaps I should reply with a similar PDF?
Surprisingly, I have sent a Keynote from the iPad, structuring the outline of a presentation for someone else to complete or build on. A similar structure would work just as well in plain text. I think the slide format helps to think about the final medium, the document, instead of adding resistance.
What type of resistance are you creating?
So when creating a product or service, find out how much resistance you are creating for the user. Are the things being designed a benefit or a cost for the user?
Posted on: September 04, 2010 by: Vesa Metsätähti
in: Just thoughts
I have been toying around with the iPad for a week now to find out what it actually is and whether it is good for anything. Well? Here are some of my findings.
GoodReader is a jewel, fetching stuff from Dropbox, mail accounts and other web storage.
The first thing I noticed was that the keyboard and typing are much better than I expected. Everyone is eager to tell how much typing on the iPad sucks (and most have not tried it), but I think the keyboard rises above that expectation. (To prove it, I'm writing this on the iPad – TaskPaper is exceptional!)
If you have your stuff lying around the web, you will get up and running fast
For a while now I have had much of my stuff "in the cloud", as they say today, I think. Thus, when getting my hands on the iPad, I was able to work on and consume stuff just as with my laptop: books, magazines, etc. in Dropbox; documents, reference info and bookmarks in TaskPaper; and so on.
Tester B, however, was in the habit of storing everything on a hard drive, so there was not much for her to do with an iPad (except browsing the internet, which is what she does anyway – but she still felt kind of uneasy and unfamiliar with the pad because of that).
I wonder if you could handle any larger touch screen
Resting a hand or touching with the wrist produces unwanted results: Angry Birds zooms instead of hurling birds to their doom.
Comparing how kids and adults use the iPad, I noticed one difference: kids fail with the touch interface regularly because they rest their other hand on the screen while touching with one.
So perhaps if the screen were larger, adults would be tempted to rest their hands on the screen as well? (Think about reading a book and not being able to hold your fingers on the paper, or drawing on paper and not being able to touch it with anything but the tip of the pen.)
While the direct-control interaction style is pleasing, the touch interface is not ergonomic.
Drawing this on the iPad was easier than I would have expected, but it strained my fingers, wrist and shoulder. Massive headache the following day.
Kids love using the iPad even over the iPhone or iPod touch: you can grab the icons and the screen with your hand instead of poking with a finger. They get really fast what happens where, much faster than with a mouse/trackpad and keyboard. There is less abstraction in a gestural interface with direct control (except when the activities are abstract and gestural metaphors can't be applied).
For grown-ups the direct control is great too. There is enough accuracy to click links on web pages. But while the cognitive friction is lighter, extensive use of the touch screen is physically tiring. Drawing the example image on paper would be no special strain on the hand, but doing it on the iPad caused tenosynovitis and a massive cramp in the shoulder. The same goes for typing something longer (like this), because you can't rest your arms as easily as when typing on a keyboard that rests on a table. (That will not work with the iPad because the screen is an extension of the keyboard…)
This reminds me of the joystick mice that were a big thing some years ago. They might have reduced strain on the wrist, but they destroyed your shoulder, because that was what you moved the mouse with. The same goes for a finger on a touch screen compared with pen and paper: on paper the pen's support comes from the wrist, whereas with a touch screen you need to activate all the muscles from your fingertip down to your butt (if you are sitting).
The big thing is the possibility of new ways of controlling things, not necessarily of viewing them.
Loopseque in use on the iPad to control a musical performance. (http://loopseque.com/)
The touch screen on the iPad is surprisingly comfortable and accurate, and the user interface is well designed for touch control. Even though I enjoyed (!) reading books on the iPad, I expect the future to bring new wow in the context of controls.
I do not expect any funky new ways of drawing with toes or scaling by pinching with tongue and nose. The form factor and the screen would be very good for universal remoting – dashboarding a DVR (if you still have one next year) or something more complex. Good examples can be found in the music scene, where devices and software can already be controlled with better or worse iPad interfaces.
The iPad is more intimate than a laptop or PC, and it is used surprisingly differently
The iPhone has a ton of different useful and useless applications. It connects quite well with many services and communication channels. Asked what goals the iPhone supports, people would probably mention "keeping in touch with my friends/business" or "finding information wherever I need it". And it does that stuff quite well.
What people actually do with the iPhone, when you observe them, is toy around with it (with the excuse of keeping in touch with their friends). They twiddle with the interface and launch a bunch of different programs. The iPhone gives your fingers something to do and gives an illusion of doing something useful. The lack of multitasking has so far made the iPhone much simpler and more accessible than some competitors with complex (and usually unknown-to-users) multitasking models. So the iPhone fits well into the "need to do something because I'm bored or need stimulation" mentality.
A laptop has longer usage sessions (duh). If the iPhone is controlled intensively during its sessions, the laptop is more for viewing. Observing regular, real people using a laptop at home, they spend 80% of the time reading, watching, thinking and experiencing; active use is very sporadic. The laptop is left open to keep you company with the previously viewed web page. Laptop use is more relaxed, and you can step away from it whenever you want without putting it in your pocket or setting it on a table. The laptop can run in the background while you switch your concentration to something else, or just sit there while you watch TV.
The iPad is quite different. It is too large to fulfill the need for occasional stimulation or twiddling with the interface; it is more immersive and demanding. On the other hand, the iPad can't be set up with the array of information you want to keep peeking at and left on the table to wait for your attention in that state. On the iPad the upcoming multitasking will be great, since you need to be able to switch from one context to another more often than on a phone. For example, writing this in the browser while checking other browser windows for links, or for something completely different, works great. With separate applications, leaving and re-opening the browser would be too much cognitive friction.
So the iPad will not fill the need for background activities and information, even though from a usage and capabilities point of view it could. And that is why it feels a bit strange and won't replace the laptop: you can't simply step away from it, you always need to put it "away".
Reading and intensive sessions work well with iPad
I was surprised at how comfortable reading was on the iPad. I felt some wow when I was able to walk easily around the house with the device and continue facebooking in the kitchen (yeah, truly revolutionary). Taking notes in a meeting or during a face-to-face session was much more comfortable than with a laptop (a barrier) or an iPhone (requires too much focus due to the small keyboard).
So what is it? I don't know, but I'm quite addicted to using it for some tasks. Let's see after a month if I'm still using it for something.
Posted on: August 29, 2010 by: Vesa Metsätähti
in: Just thoughts
Don Norman casts a critical eye on touch devices in his essay Gestural Interfaces: A Step Backwards In Usability. On many points I have to agree with him. Through recent years many technologies have been labelled and argued to be "easy to use", while the label should have been given to their application in a context.
Software is claimed to be user friendly if it
- is menu driven
- uses high contrast colors in GUI
- is controlled with hard keys
- is controlled with soft keys
From the design-layers perspective, the point is to choose UI technologies that support the activities on the task layer, which in turn help to reach the goals defined on the service layer. You can design a device with only mechanical controls that has bad interactions and does not support the user's goals. You can certainly do the same with a gestural UI.
From genius design to activity or experience oriented design
"No entry for power-driven vehicles, driving to a property excepted." Quite an unnecessary sign for a dead-end road with just houses, but still perfectly legal.
In Norman's essay, inadequate guidelines in their many forms receive the most attention: "if there is no good, set standard for X, you should not be doing it." Complying with standards (http://www.4layers.com/thoughts/articles/10/layers-for-design-and-parking-at-airport#maturity) can produce bad design, and clumsy transportation planning is a good example: while the law might require or give room for some solutions, it does not mean they support any driver's goals.
For example, a short dead end with only houses on it might (and does!) have a "No entry for power-driven vehicles, driving to a property excepted" sign. If there were a problem with juveniles recklessly roaming around on their mopeds, there would be an issue for that sign to solve. If it is there just for the sake of regulation ("we put these on the side roads in this neighbourhood") – not to support an activity or experience – it adds complexity and maintenance work.
And back to gestural interfaces… choosing a touch screen as an interface just because it is trendy is quite a bad decision.
Scalability, feedback and visibility?
One thing that has always confused me is the striving for uniform controls and feedback across all devices, small or big. No, every remote control should not have its own conventions, but there is no need, for example, for a phone and a laboratory analyzer to work similarly just because both are operated with gestures.
I see where Norman is coming from: there might be some basic activities for which it would be smart to have common gestural ground, just as you should not have doors that are pulled open by pushing.
A normal Windows GUI complying with standards is quite bad when the graphics are scaled up and put on a touch screen. It looks like Windows for the disabled, and makes you feel like one.
That is where feedback steps in: due to their nature, gestural interfaces need a heap of exaggerated feedback. While physical keys move and make sounds, a touch GUI does not. Thus you need to use secondary media to communicate with the user.
The presentation layer is very important in communicating what is possible and doable – what can be expected. It is possible to draw an interface that does not hint at what the product is capable of. Sometimes I think it is even good to leave out the visual representation of certain shortcuts and activities that are meant for power users. (Just give enough feedback so that if someone accidentally triggers one, they realize what has happened.)
What actually is the problem with gestural interfaces?
When the interface moves to a more abstract level, conventions and rules do not help that much. Thinking about them with the same criteria and paradigms as, say, an ATM does not work well. As there are more combinations, and what the interface actually is becomes more vague, the quality of the interface and interaction depends more on the machine guessing what you mean than on how the UI is devised.
Think of a gestural interface without a screen: any of your gestures, a change of pose, a movement of the eyes, a wave of the hands, might be a gesture meant to control a product, or it might not. (Yeah, those "wave your hands in the air and stuff happens" interfaces have so far been a very bad experience, as the use of buttons is just replaced with arbitrary gestures that might or might not be considered interaction.)
Good old voice control goes into the same category. The product needs to know that you are addressing it (instead of someone or something else in the same room) and what you actually want, even though you as the user may not know all the functions it is capable of. Think of a voice-controlled oven and boiling an egg: how do you know what commands should and can be given? If there is a list, would it not be simpler to just have corresponding buttons? Especially if you need to get the oven's attention by, for example, pushing a button before speaking a command.
What a machine with a voice-controlled interface should really have is enough AI to understand what you mean and how its actions will affect the current situation. Then it should act if the result would not be an instant disaster, and confirm with you that what it is doing is close to what you wanted.
And this all leads to…
This post was supposed to be about what you can actually do with an iPad. Looks like I'm not getting there this time. Gestural interfaces, such as the touchscreen on the iPad, sure have potential problems. But more importantly: is that tablet any good? The truth (or opinion) will be revealed soon.
Posted on: November 02, 2009 by: Vesa Metsätähti
in: Thoughts on layers
Interaction style is something I started thinking about defining back in the late 90's. Early in 2000 there was not much "leisure" type software (not counting games, as they are a bit different from what we are looking at here). Software (and hardware?) for consumer digital photography – especially the bundled kind – was quite awful. Although the idea of defining interaction style got going with tangible products and user interfaces, this model was brushed up in software-related projects.
The model became very important with software-embedded products and services. Some such projects I have worked on treated the physical side of the product as a sculpture, and the design of its behavior as a totally different, unrelated project.
The reasons for needing a way to define interaction style were that:
- the way something works defines it just as much as its physical appearance
- the way something works needs to correlate with the physical appearance and other manifestations of the product or service (advertisements, packaging, physical appearance, etc. give a promise of what the product or service will be like; if they tell a different story or lie, the customer will be extremely disappointed: "this car looks like an Audi but drives like an Opel")
- in the future, with ubiquitous computing, what defines the experience is not what we use but how we use it (you might be using a handset from X but a control device from Y, and the X might be quite transparent: "in this car my steering wheel drives like an Audi")
Interaction style can be defined with the relationship of controls and feedback
The range of controls in the model runs from casual to conscious. Casual controls include anything that has not been specifically designed for the use (or whose use has not been specifically designed around being controlled), and/or that does not provide an exact, motivated or sensible way to tackle the task.
In this model, feedback ranges from plain to rich. Feedback is a bit tricky to evaluate, since it does not naturally fit into such a one-dimensional continuum. Rich feedback would be something like a melody instead of a simple beep. Richness or plainness is relative: it is found by comparing the feedback to what else it could (realistically) be, or to the conventions that currently exist or have existed.
In our diagram, feedback sits on the horizontal axis and controls on the vertical axis. Thus in the lower left corner we have the area of plain feedback and casual controls, and in the upper right corner the place for rich feedback and conscious controls. Interaction with rich feedback and conscious controls can be categorized as tool-like interaction; interaction with rich feedback and casual controls can be categorized as toy-like interaction.
In a sense this describes the relationship between what is on the task layer and what is on the presentation layer. More importantly, it is a tool for work on the task layer to define what kind of feedback the presentation layer should consider. It is up to the presentation layer to decide what the feedback actually is.
Feedback changes between primary and some secondary media
Primary media is what you would think of as the natural output or feedback of your actions. Secondary media is something added, more artificial. For tool-like interaction, secondary feedback gives a more detailed view of the matter, usually in such detail that it is redundant for a layman. With toy-like interaction, secondary media makes the natural feedback more visible or understandable, or covers for a lack of natural feedback.
As an example, consider four different audio tools. For all of them the primary media is sound.
In the lower right corner we have a mini stereo system. There can be a lot of (shiny) controls, but they perhaps have little or inaccurate effect on the sound the system produces. That does not really matter: the user knows the stereo sounds good through the flashing lights it has – the secondary media. Clearly a case of toy-like interaction.
In the lower left corner we have a portable radio. Not too many buttons, and not too deep controls. As feedback, knowing that you are playing the correct radio station is enough (even if stations announce what you are listening to, they fail if you do not recognize the station through its program selection). Let's call this straightforward interaction.
In the upper left corner we see a DJ's record players. There can be plentiful controls, but the largest are the most important: by spinning the decks, the DJ is in full control of how the music is played. Even though there can be a handful of feedback to help the DJ in her job, the real professional trusts only her ear, listening to how the music sounds. Direct manipulation, perhaps?
In the upper right corner is an image of a studio mixer. The studio has accurate control over many things, and the technician trusts her ear in evaluating the sound. Additionally, she gets a bunch of extra feedback on sound levels etc. Tool-like interaction.
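The four quadrants above can be condensed into a tiny lookup, using the axis labels of the model (casual↔conscious controls, plain↔rich feedback). The category names come from the text; the function itself is just an illustrative sketch of the diagram, not part of the model.

```python
# Sketch: the interaction-style diagram as a (controls, feedback) lookup.
# Axis values and quadrant names are the ones used in the model above;
# the example devices are noted in comments.
QUADRANTS = {
    ("casual",    "plain"): "straightforward interaction",  # portable radio
    ("casual",    "rich"):  "toy like interaction",         # mini stereo system
    ("conscious", "plain"): "direct manipulation",          # DJ's record players
    ("conscious", "rich"):  "tool like interaction",        # studio mixer
}

def interaction_style(controls, feedback):
    """Map a (controls, feedback) pair to its quadrant name."""
    return QUADRANTS[(controls, feedback)]

print(interaction_style("conscious", "rich"))  # tool like interaction
```

A designer could use a table like this as a checklist: decide where each part of the product should sit, then verify the feedback the presentation layer actually delivers matches that quadrant.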
There is no right combination and good usability can be found in all of them
Unlike with some diagrams, there is no preferred combination. The “correct” choice depends on the context and the goal. (Working with a high-end lawn mower should not feel toy-like, unless that is for some reason specifically decided to be the thing that sets it apart from the rest. But even then the physical and visual world should have some sort of connection to the toy-like-ness, or it will feel detached and unmotivated.)
Good design and usability can be found in each category. (The straightforward portable radio is actually the most stylish of the example devices.) The important thing is to know what part of the product or service falls into which quadrant, and to communicate it.
Sometimes the thinking of one quadrant is forced onto every part of a product. But not everything has to behave similarly — it is more important to behave appropriately than “logically”. In a health care device the examination process can be direct manipulation or perhaps tool-like interaction (depending on the application, of course). Selecting the current patient from a database should, however, be straightforward and not forced to work like direct manipulation, for example.
Now, how does this relate to the 4 layers for design?
The interaction model is a useful tool for:
- generating designs — trying out what the product or service would be like if executed in another quadrant, for example
- validating design — evaluating if the design fits in the defined interaction style
- communication — it is easier to get the design right if different professionals have common terminology for this kind of abstract stuff
Much of the documentation related to design work is directed at the infrastructure layer. Interaction style is useful as a mental tool for the designer and as a communication method with other designers. It is less likely to be useful as documentation for implementation.
Footnote: Excel turns into a bad toy when a “fancy GUI” is added, etc.
Posted on: October 28, 2009 by: Vesa Metsätähti
in: Just thoughts
In one project the client was a proud businessman, the inventor of many fine things. He knew the technology of his product through theory and experience. Now it was time to take a leap and have the product make sense for ordinary people: those who did not understand, or care to understand, how well designed the details of the technology were. The invention worked, and the task was to craft it into an acceptable and desirable product that would matter to the user. (A very common brief — this is what all designers usually do, no matter the discipline.)
Because of the technological invention, the client thought that he would have the right answers for every other question as well. (This is not uncommon either.) You have to respect the experience of someone who has been in business for a long time, but what surprised me was the claim that “it is not possible to do research for a 1st generation product”. (And in this case the invention had been in similar use before.)
Sure, surveys and research work do not give you all the right answers, but they give you good inspiration and requirements to get started. (Just as there are business requirements etc. that you need to consider.)
To illustrate how to do some work with users (current or potential…), I used this good old diagram to describe what stuff matters when designing a product or service:
At the core of product or service design is the task.
If you know at least one of these layers, you can start doing research to find out how the related activities and goals could be supported. If there is a strategic decision to have a service for mothers of 3 children (user), you can find out what goals there are to support and what practical problems there are to solve. You can find out more about the contexts and environments these people act in. The more strategic decisions there are affecting these layers, the more focused the research will be. In most cases the target for design is the tool, but in my opinion it could just as well be the task, supporting business decisions that define the target group, context, and environment for the service, etc.
Another use for that sphere of layers is to illustrate how some very good solutions turn out to be very poor when copied into another product, service, or situation.
Think about, let's say, an online store that sells just about everything, from clothes and fragrances to home electronics and vehicles. This store does not have shops of its own; it provides the online shop service for other vendors through a unified web service. The product descriptions and presentations are actually tailored to present the product in the vendor's own service. From there they are copied into the unified store.
A specialized store and a general store have a common environment and user, but the task, tool, and context are different.
Small vs. large televisions at Amazon. At least they have the ratings and size in the product title. Amazon is not about such lists either, because it already has a reliable reputation and its products are well connected with each other.
The transition from a specialized vendor store to a unified general store actually changes a lot. What remain the same are the user and the environment (computer, web browser). If you are in a TV store, a big headline like “this is the best price-to-quality ratio we have to offer” makes sense. The same headline in a general store does not tell much (as you might just as well be looking for knee pads).
In a general store this might get solved through a very unified presentation. Amazon does offer TVs through the same tool — description and comments on a web page — as it sells DVDs, for example. It is hard to tell from the picture which TV is small and which is large. They do have ratings, however, unlike fruugo, which relies even more on a somewhat meaningless image of the product. In addition, you can narrow the search by color — relevant with clothes, but not with TV sets.
Product presentations in the fruugo www-store rely on pictures, but the picture does not necessarily describe the product well. Filtering TVs by color is a nice touch.
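One way to sketch the point about the color filter: in a unified general store, the relevant filters (facets) should be configured per product category instead of being inherited from one template. This is a hypothetical illustration only; the category names and facet lists below are made up, not taken from Amazon or fruugo:

```python
# Facets that actually help a user narrow down each category.
CATEGORY_FACETS = {
    "clothing":    ["size", "color", "brand", "price"],
    "televisions": ["screen_size_inches", "resolution", "rating", "price"],
    "dvds":        ["genre", "rating", "price"],
}

# A generic fallback for categories nobody has thought about yet.
DEFAULT_FACETS = ["price", "brand"]

def facets_for(category: str) -> list[str]:
    """Pick the facets to show for a category, falling back to a generic set."""
    return CATEGORY_FACETS.get(category, DEFAULT_FACETS)
```

With a per-category table like this, “color” simply never shows up for televisions, while a television-specific facet like screen size does. Copying one facet configuration to every category is exactly the “good solution copied into another situation” problem described above.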
That is why we eventually (sooner rather than later) need to know what is on the task layer and what is on the service layer.
Posted on: October 21, 2009 by: Vesa Metsätähti
in: Just thoughts
Yesterday there was once again a request for a 20,000-feet view of the project. I hate working with PowerPoint, so I had the material in a carefully designed InDesign → PDF (a mix of text that can be read from the document at your own pace and headline styles that are viewable when you project the PDF on a wall).
The 20,000-feet version was made by copy-pasting the biggest headlines into two PowerPoint slides. The result was incomprehensible. Not (just) because the headlines would have been vague (they were, after all, written also for stand-alone viewing in a meeting) or written for another context. They just did not deliver enough on their own.
Still, we shared the 20,000-feet version. The feedback was in line with our initial experience: it all sounded good, but what did it really mean?
A 20,000-feet view needs a lot of detail to be useful
I might pay too much attention to the VFH metaphor, but if you want to describe an area, an actual aerial photo is a far more versatile choice than a strong abstraction.
In an aerial photo you see much more detail and many more relationships. You get an idea of how close the houses come to the river on the right and how many of them there are; you see the trees along the road to the farmhouse, etc.
How do you implement this when you need to give a handout of your presentation? This time the solution was simply to take the four text-ridden pages and scale them onto one. The wall-readable headlines were still (somewhat) readable, and you could see how much text, how many images, and how many bullets illustrated each topic. So the headlines were pretty much the same, but now they were presented in at least some kind of landscape: a context.
Essentially, I think that giving 10,000–50,000-feet views of a subject or presentation should be a feature of the presentation software, not a feature of the presentation. PowerPoint at least has the ability to show a light table of the slides, but it is rarely used, at least in a presentation context. Perhaps because slides are usually a bastardization of presentation and prose.
In addition to these somewhat more information-dense documents that can be projected on a wall, I also do slideware. But that is always unprintable: a background for speaking and conversation.
Posted on: October 18, 2009 by: Vesa Metsätähti
in: Just thoughts
In his excellent book How Designers Think, Bryan Lawson writes about design by drawing:
If the designer is no longer a craftsman actually making the object, then he or she must instead communicate instructions to those who will make it. Primarily and traditionally the drawing has been the most popular way of giving such instructions. In such a process the client no longer buys the finished article but rather is delivered of a design, again usually primarily described through drawings. Such drawings are generally known as ‘presentation drawings’ as opposed to the ‘production drawings’ done for the purposes of construction.
However, in the context of this book, an even more important drawing is the ‘design drawing’. Such a drawing is done by the designer not to communicate with others but rather as part of the very thinking process itself which we call design.
Although Lawson’s book is quite architecture-orientated, its contents apply well to all design. Thinking of the layers of design, ‘design by drawing’ applies best to the presentation layer. ‘Drawings’ that communicate ideas exist for the other layers too, but for the task and infrastructure layers the drawings are not design drawings but rather production drawings: documentation for someone implementing such a system.
Not having adequate tools to think about task layer and implementation layer design can be a problem. Rapid prototyping and other methods of trying out the design are good, and a coder working on the implementation layer can in many cases be seen as a craftsman producing the final product (or production tool).
But not all products or services are implemented by a coder. Even in high-tech products or self-services, the implementation layer involves thinking about and designing processes, organization, roles, etc. In a low-tech service the whole experience is conveyed through them. Thus designing a service that does not have the event rhythm of, say, a graphical interface is very hard, as there is no established way to make a ‘production drawing’ or a ‘presentation drawing’ of it, let alone a ‘design drawing’.
We need better ways and tools to prototype and think about, present, and document the service for implementation.