Facebook’s UX is killing the “home” button

Facebook has mastered making users ignore the bottom middle button, one of the most comfortable navigation areas. Look at the diagram below: no matter which hand holds the phone, it has access to the most important area of the app. Instead, Facebook chooses to use the bottom navigation to promote its own new features, which are far from the main use case.

Mark Zuckerberg said yesterday, “You might have noticed all of the cameras that we’ve rolled out recently,” but didn’t share any usage statistics about them.

Let’s quickly dip into some examples:

Facebook

On Facebook, no one looks at the Buy section. Who uses it is a total mystery; seriously, I don’t know anyone who does. Then there’s Facebook Stories, which offers nothing and, as far as I’m aware, is used by few people, yet Facebook keeps insisting on having it there.

The bottom navigation suggests a left-to-right priority: Feed first (the main use case), Adding friends (a rare case), Marketplace in the most important position (which no one uses), Notifications (probably the most used button of them all), Settings.

Instagram

On Instagram, people don’t take pictures in the app; they add pictures they took with their camera app. That’s mainly because the regular camera offers more options, and it’s easier to edit first and then select from your gallery.

The bottom navigation is similar, yet without text labels for the icons: Feed (the main use case), Explore (quite popular), Take a photo (the main thing Facebook wants you to do), Notifications, Settings.

Facebook Messenger

In Messenger no one uses the My Day camera feature.

The bottom navigation has a huge blue button that is laid out uncomfortably on top of your friends’ names and competes with the blue of the active tab. The priority is: Home-Explore + conversations (the main use case), Calls (a less used feature), Camera (let’s make it big so people use it), Groups (which has its own Facebook app), People (your contacts).

WhatsApp

In WhatsApp no one uses the camera button.

In WhatsApp, all of the priorities have shifted: Status (not used by any of my contacts), Calls (a very popular use case), Camera (no one uses it), Chats (the main use case), Settings.


What do users want?

For such a huge company it’s embarrassing. It feels as though someone there doesn’t listen to the designers.

Where users really look — http://uk.businessinsider.com/eye-tracking-heatmaps-2014-7?r=US&IR=T

Let’s look at the core purposes of these platforms, along with my critique.

Facebook — What do we use Facebook for? Reading news, asking questions to a community you care about, complaining, raising money, and stalking people you don’t really talk to.

Hence the navigation is completely misaligned. It seems like Facebook hasn’t decided what its main purpose is: news or people’s lives. It’s OK to try to have both, and it kind of worked for a while in the feed, but once there are creation options based on both, it just gets very confusing.

Messenger / WhatsApp — We use both of these for talking to friends.

Adding the camera is important as an easy way to share images in the chat, but having that button create a story and a global chat feels like it’s introducing a new usage for these platforms. People usually talk to individuals or groups, not to everyone, and especially not to every contact they have. There isn’t even a proper way to target specific groups. In Snapchat’s case the whole experience is extremely curated. You could argue that you can slightly curate in Facebook / Messenger, but in WhatsApp there is zero curation. In which case, who is this feature aimed at? Who realistically is going to use it, and who’s going to engage with the stories they share?

Users mainly look at the middle, and it is the easiest area for their fingers to reach

Instagram — A platform to follow your interests and share your life.

Stories are aligned with the initial purpose of Instagram, letting people share only photos they’ve just taken rather than pre-photographed images. Providing a way to share photos from a user’s gallery really changed Instagram, and in some ways it allowed professionals to outshine regular users with tools and talent. Stories offer another way of sharing that levels the field and has helped bring back the originality and integrity of the content. However, since Snapchat allows you to upload pre-made video, I suspect all of the Facebook story platforms will allow it in the future, and then this feature will again lose its originality and integrity.

The bottom navigation mess across Facebook’s apps = no consistency + changing hierarchies

Conclusion

I love Facebook and all of its acquired companies; each one is really good on its own. WhatsApp for the older generation: it’s light and fast. Messenger for a nicer, almost the nicest (second only to Telegram), chat experience. Instagram for the amazing creation that keeps happening there. Facebook for the ability to see what my friends care about and to participate in groups.

Now there is something that has been tried in all of these apps without any thought about consistency or the advanced usage of their ecosystem. It’s a shame. The middle button / middle tab was supposed to be one of the main points of focus of the app: the core use case. But Facebook is telling us it’s not. In fact, it seems like there is no logic behind the order in which menu items are sorted. Ultimately that diminishes the user experience, making it far from ideal.

Subtitles were never designed. The missing element in TV typography design

In the past three years I’ve been designing televisions and audio systems for Samsung. Throughout these years I found a problem that no one has tried to tackle. It’s not a sexy problem, but it’s so essential that it’s a crime not to fix it. Unbelievably, we are talking about typography in subtitles.

All major companies take their fonts very seriously. They design them to match print and digital, produce different weights dedicated to specific usages, and publish guidelines on when to use different sizes and colors. There are also fonts for television, but subtitles are somehow neglected.

Source: Giphy

Software companies are not the only ones that have neglected subtitles; TV content companies have neglected them too. It seems that such an important piece of design is an afterthought, or just blame being passed around between software companies, content creators, and hardware manufacturers. This is from the BBC website: “Subtitle fonts are determined by the platform, the delivery mechanism and the client as detailed below…subtitles cannot be accurately determined when authoring”

Let’s have a look at a few subtitle experiences:

Apple — Even in the Human Interface Guidelines there is not much about subtitles, just the accessibility options.

BBC

BBC screenshot: Grand Designs

Amazon — has nothing in their guidelines about subtitles besides the ability to turn them on and off.

Amazon Website subtitle options (screenshot)
Amazon video app subtitles options (screenshot)
Amazon Fire TV subtitle options (firetvblog.com)

Netflix — I loved their answer to why subtitles are important: “Subtitles and closed captions have historically been deemed a secondary asset. Even our own technical specification lists subtitles and closed captions (timed text) among several types of secondary assets that NETFLIX requires. The reality is that for millions of our members around the world, timed text files are primary assets.”

Too bad it’s not reflected in their design.

Netflix default subtitles (screenshot)
Netflix app subtitles options (screenshot)

Google — Google has a lot of options, and they are the only ones that allow changing the font, but still none of these fonts were designed for TV.

Google Play Movies app subtitle options (screenshot)
All of Google’s subtitle options

The special characteristics of TV fonts

Distance — Unlike other platforms, the distance between the viewer and the TV is the most static of all. Printed materials can be held at different distances and angles, which affects the perceived size and the light hitting the paper. On mobile, the distance also changes depending on the interaction and on other elements that appear on screen.

TV is static: the subtitles are always in the same bottom area of the screen, and the distance mostly remains the same. Of course, people choose different distances in their living rooms and different TV sizes, but that’s what accessibility settings are for. The TV size should directly affect the size of the subtitles, and it should be adjusted as part of setting up the TV.
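
To make that concrete, here is a minimal sketch of how a TV setup flow could derive subtitle size from the screen diagonal and viewing distance. The target visual angle, the default values and the function name are illustrative assumptions, not figures from any guideline:

```python
import math

def subtitle_height_px(diagonal_in, aspect=(16, 9), vertical_res=2160,
                       distance_m=2.5, visual_angle_deg=0.4):
    """Suggest a subtitle cap height (in pixels) so the text subtends a
    fixed visual angle regardless of TV size or viewing distance.
    visual_angle_deg (~24 arcminutes here) is a placeholder target."""
    w, h = aspect
    # Physical screen height from the diagonal and aspect ratio.
    screen_h_m = (diagonal_in * 0.0254) * h / math.hypot(w, h)
    # Physical text height that subtends the target angle at this distance.
    text_h_m = 2 * distance_m * math.tan(math.radians(visual_angle_deg) / 2)
    return round(text_h_m / screen_h_m * vertical_res)

# A 55-inch 4K TV viewed from 2.5 metres -> roughly 55 px of cap height.
print(subtitle_height_px(55, distance_m=2.5))
```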

Color — On TV the background is constantly changing. That is why, throughout the years, two main colors were featured as defaults in subtitles: yellow and white. These two colors are less common on TV; it’s not easy to shoot things in yellow or pure white (unless it’s a scene in the snow). However, since the background keeps changing, it is impossible to rely on one color, or alternatively to keep changing the color whenever the background changes. The next two solutions came as additions to the color choice.

Background — Many subtitles place a background behind the text. This helps differentiate the font from the moving picture while aiding readability and clarity. However, a background is not the first choice most services make.

Stroke — I find it somewhat strange that a stroke is the most common way to make sure subtitle text is visible. The main reason: fonts are not designed to carry a stroke; they are designed to work as is. When a stroke is applied to a font, it eats into its fill or, alternatively, into the negative space, which makes it much less readable. The most common uses of stroke also feature heavy contrast, like a black stroke on a white or yellow fill.

Even on an S, an easier, less complicated letter, the stroke definitely destroys the character’s design

*In weird cases, a few companies decided to place a shadow on the colored font. That’s terrible.

Speed — With subtitles it is essential to understand that users don’t have all the time in the world to read them. In some languages speech is faster or words are longer. In English, the average time for reading a subtitle line is 3 to 5 seconds. Therefore readability here is even more critical.
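
As a rough back-of-the-envelope check: with a line of around 37 characters (a common broadcast line-length convention; treat the exact number as an assumption), the 3-to-5-second window above implies these reading speeds:

```python
# Reading load implied by the display window discussed above.
line_chars = 37  # assumed broadcast-style line length
for display_s in (3, 4, 5):
    print(f"{display_s}s on screen -> {line_chars / display_s:.1f} chars/sec")
```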

Source: Giphy

Multi-language — Mainly in countries that don’t dub TV shows, there is a need for subtitles in a language different from the original movie’s. In some countries subtitles appear in two lines: the first line is translated into the main national language and the second into a second language. In Israel, for example, subtitles appear as Hebrew on the first line and Russian or Arabic on the second. One can only imagine how difficult this is to read if there is no consistency in the attributes between the two. There are fonts designed specifically to pair well with the fonts of another language. This should also be applied to TV.

The extra step

Throughout my exploration, I couldn’t find out where or by whom the ball was dropped on this issue, but I think it’s also a great opportunity. Companies have worked hard at creating and owning their own user experience across platforms. Especially now, when linear channels are ebbing and on-demand services are on the rise, it’s time to invest in and create an inclusive experience.

Typography is amazing, and creating fonts for titles or posters is fabulous. Creating for readability takes longer and is less sexy, but it provides an enormous value that lies in comfort rather than astonishment. Additionally, it’s fame for life across all of a provider’s services.

A font that takes into account stroke, colors and different backgrounds, built for speed reading on TV. A translation of the OS font into a proper subtitle font. I am hoping for a better, more readable TV experience.

The world of subtitles is fascinating, and incredible work has been done around it in terms of language, sentence design, reading flow and content integrity. But after all this, somehow, the fonts have been neglected.


If you are interested and want to read more about subtitles, try these:

BBC — http://bbc.github.io/subtitle-guidelines/#presentation

The most detailed one.

Apple TV fonts — https://developer.apple.com/tvos/human-interface-guidelines/visual-design/

Doesn’t speak about subtitles at all, just fonts.

Google Material Design — https://www.google.com/design/spec-tv/style/typography.html#

Doesn’t speak about subtitles at all, just fonts.

Netflix — https://backlothelp.netflix.com/hc/en-us/articles/215758617-Timed-Text-Style-Guide-General-Requirements

There are no design considerations, just structure and timing.

Reimagining storage

Storage is everywhere

Look at your house: half of the things in it are for storage. Sometimes there is storage for storage, like a drawer for small pots that are placed inside bigger pots. The computer science world took this metaphor and used it to help people understand where things are. Over time, new metaphors arrived, like search and ephemeral links. When I was 21 I taught a course called “Intro to Computers” to the elderly community, and I remember explaining how files are structured in folders. I could see how the physical metaphor helped people understand how digital things work.

Nest Storage by Joseph & Joseph (Bermondsey, London)

Unlike the organisation of physical objects in a house, it’s hard to anticipate where things will be on a computer if you weren’t the one who placed them there. Then search came, but even now search is somehow much better for things online than for our own computers. A lot of research has been done on content indexing for the web, obviously driven by search engine optimisation and the business benefits of reaching the top three results. However, people are not familiar with these methods and don’t apply them to the way they work with files on their devices.

In iOS, Apple chose to eliminate the Finder and files in general. Apps replaced icons on a desktop, the App Store replaced thousands of websites, and the only folders we actively access on the phone are the gallery, videos, and music. Even Android concealed its folders, and through Share menus users can mostly avoid the transitions that were the norm on computers.

How do we deal with files nowadays?

  • We sometimes look them up via search
  • We put them in folders, in folders, in folders
  • Files sometimes come as attachments to emails or messaging apps.
  • We (if we’re advanced users) tag them using words or colors

So here are the problems:

  1. Search — it exists, but indexing needs to happen automatically (a minimal sketch of what that could mean follows this list).
  2. Self-organisation — lacking, and not automated in any way.
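
To illustrate the first problem, here is a minimal sketch of automatic local indexing: walk a folder tree and build a tiny inverted index from filenames and plain-text content. The paths, the handled file types and the `build_index` helper are all hypothetical; a real indexer would also handle metadata, binary formats and incremental updates:

```python
import os
import re
from collections import defaultdict

def build_index(root):
    """Walk a folder tree and build a tiny inverted index:
    token -> set of file paths whose name or text content contains it."""
    index = defaultdict(set)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            tokens = set(re.findall(r"[a-z0-9]+", name.lower()))
            if name.endswith((".txt", ".md")):  # plain text only, in this sketch
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        tokens |= set(re.findall(r"[a-z0-9]+", f.read().lower()))
                except OSError:
                    pass
            for token in tokens:
                index[token].add(path)
    return index

index = build_index(os.path.expanduser("~/Documents"))
print(sorted(index.get("invoice", set())))  # every file mentioning "invoice"
```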

A new horizon for our content

Companies see value in holding on to user data, which is of course stored in the cloud. There, they can compare and analyze it, run machine learning on it, and segment it for advertising. Their ability to centralize things helps them give us a smoother experience by creating an ecosystem.

But storage isn’t just a file or a folder. It has a physical presence as a file, but it could also be an object in real life, a part of a story, a timeline with interactions, challenges and emotions. How can we rate the importance of things and interlink them with past events? How can people relive a day from a while ago, Black Mirror style?

The average person has x files: probably x photos, x videos, and so on. For many of these there is dedicated software: for music you have players, for photos you have galleries with new search and event-creation features, for video you have your library or Netflix. But what about the rest of the files? What about your Excel sheets, your writings, your projects?

Think of the revolution that happened in the past couple of years around photos. We started having albums, then stories and events. On top of that we have the elements of sharing, identifying who is in a photo, conquering the world with our pins. On top of that again we have another layer of filters, animations, text, drawing, emoji. All that without even speaking about video, which needs to adopt these kinds of paradigms to create meaningful movies…it’s nicer to share.

If we look at the workplace, there are products that try to eliminate the file by opening different streams of information, like Slack or HipChat.

The one thing these products still lack is the ability to convert all of that into a meaningful project, which is what project management software attempts. Project management software tries to display things in different ways, which helps cover use cases and needs. However, most of its innovation is still around aggregation: displaying calendars, tasks and reports. Things get complicated when tasks start having files attached to them.

What kind of meaningful file systems can we create?

 

Imagine the revolution that happened for photos happening for files. Photos already have so much information linked to them: geolocation, faces, events, text, and the content itself. We finally have software with the ability to analyse our files; a good example is the LipNet project, which extracts subtitles from moving lips in video. Imagine the power of AI working on your files: all those revisions, all that old stuff, memories, movies, books, magazines, PDFs, Photoshop files, sheets and notes.

How many times have you tried to be organised with revisions and saved docs? How many times have you created a file management system you thought would work, or tried a new piece of software to run multiple things, only to realise soon after that it doesn’t work for everything? It is a challenge that hasn’t been solved in either the work environment or the consumer environment.

Some cloud-focused products have started implementing seeds of this concept, usually in the form of version control and shared files. It is amazing that people can now work collaboratively in real time. However, I think it will spiral into the same problem: there are no good ways to manage files and bring them together into a meaningful contextual structure that could once and for all help us get rid of nesting and divided content.

In terms of user navigation, there are currently five ways to reach content on the web:

  1. Search — you have a specific thing you’re looking for
  2. Discovery — start from a general search or navigation point and explore
  3. Feed — follow people
  4. Message — content is pushed to you by others
  5. URL dwelling and navigation — repeatedly going in and out

But on the computer, with your own creations, there are only two:

  1. Search — for a specific file, which is then maybe associated with a folder
  2. Folder dwelling — no discovery, just clumsily looking around for your stuff and trying to make a connection or to remember

It’s early days for this type of service

There are some initial sparks of innovation around systems that simplify files, for example version control in Google Drive or Dropbox. The idea that a user can work on the same file, with the same filename, without ever worrying about losing important information is comforting. Eliminating file revisions can be a good step toward helping us find things in a better way.

Codebases on platforms like GitHub also help control projects, with version control that allows collaborative work and the merging of branches. That too is a good step, since it proves that companies are thinking about file projects as a whole. However, it still can’t help create contextual connections or meaningful testing environments.

Final note

AI systems are finally here, and what they do in the cloud they can do with our files. Maybe we can finally get rid of files as we think of them and finally be organised and contextual. And, god forbid, the structure doesn’t have to be permanent; it can be dynamic, adapting to the user’s needs and the content that accumulates every day.

Voice assistance and privacy

Voice assistant technologies are hyped nowadays, yet one of the main concerns voiced about them is privacy: devices listen to us all the time and document everything. For example, Google keeps every voice search users make and uses it to improve its voice recognition and provide better results. Google also provides the option to delete these recordings from your account.

A few questions come to mind: how often do companies go over your voice recordings? How often do they compare them with other samples? How much does the system improve thanks to them? I will try to assume answers to these questions and suggest solutions.

A good example of a privacy-conscious approach is Snapchat. Messages in Snapchat are controlled by the user, and they also disappear from the company’s servers. Considering the age group they aimed for, it was a brilliant decision: teenagers don’t want their parents to know what they do, and generally they want to “erase their sins”. Having things erased is closer to a real conversation than a chat messenger is.

Now imagine this privacy solution in a voice assistant context. Even though users want the AI to know them well, do they want it to know them better than they know themselves?

What do I mean by that? Some users wouldn’t want their technology to frown upon them and criticize them. Users also prefer data that doesn’t punish them for driving fast or for being unhealthy, a model now led by insurance companies.

Having spent a lot of time in South Korea, I have experienced many joy rides with taxi drivers. The way their car navigation works is quite grotesque: imagine a 15-inch screen displaying a map that goes blood red, with obnoxious sound effects, whenever they pass the speed limit.

Instead, users might prefer a supportive system that can differentiate between public information that can be shared with the family and private information that might be more comfortable to consume alone. When driving a car, situations like this are quite common. Here is an example: a user is driving with a friend in the car. Someone calls, and because the call will play on the car’s sound system, the driver has to announce that someone else is present. The announcement defines the context of the conversation and prevents content or behavior that should stay private.

The voice assistant will need contextual information so it can figure out exactly what scenario the user is in and how and when to address them. But we will probably need to let it know about our scenario in some way too. Your wife can hear that you are with someone in the car but can’t quite make out with whom, so she might ask, “are you with the kids?”.

Voice = social

Talking is a social experience that most people don’t do when they are alone. Remember the initial release of the Bluetooth headset? People in the street thought you were speaking to them when you were actually on the phone. Another example is in-car voice systems: some people thought the guy sitting in the car was crazy because he appeared to be talking to himself.

Because talking is a social experience, we need to be mindful of who we speak to and where, and so does the voice assistant. I know a lot of parents with embarrassing stories of their kids blabbing things they shouldn’t say next to a stranger; many times it’s something the parent said about a person or some social group. How would you educate your voice assistant? By creating a scenario where you actively choose what to share with it.

Companies might aspire to get the most data possible, but I doubt they really know how to use it. In addition, it doesn’t correspond with the expectations consumers have. From the user’s perspective, they probably want their voice assistant to be more of a dog than a human or a computer. People want a positive experience with a system that helps them remember what they don’t remember, and that forgets what they don’t want remembered. A system that remembers you wanted to buy a ring for your wife but doesn’t say it out loud next to her, and instead reminds you in a more personal way. A system that remembers that your favorite show is back but doesn’t say it next to the kid, because it’s not appropriate for their age.

A voice assistant that has Tact.
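
As a thought experiment, tact could start as nothing more than a delivery rule: before speaking, check who is present against who a piece of information must be hidden from. Everything below (the `Reminder` type, the sensed set of people, the names) is hypothetical, and “who is present” quietly assumes sensing that is itself a hard problem:

```python
from dataclasses import dataclass

@dataclass
class Reminder:
    text: str
    owner: str      # who the reminder belongs to
    hide_from: set  # people it must never be spoken near

def deliver(reminder, people_present):
    """Decide how a tactful assistant delivers a reminder,
    given who it believes is in the room."""
    if reminder.hide_from & people_present:
        # Someone it should be hidden from can hear us: go silent.
        return f"(silent notification to {reminder.owner}'s phone)"
    return f"Speaking aloud: {reminder.text}"

ring = Reminder("Pick up the ring you ordered", "Dan", {"Dana"})
print(deliver(ring, {"Dan"}))          # spoken aloud
print(deliver(ring, {"Dan", "Dana"}))  # rerouted to a private channel
```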

Being a dog is probably the maximum voice assistants can reach nowadays. It will progress, but in the meantime users will settle for something cute like Jibo, which has some charm to fall back on when it makes a mistake and which can at least learn not to repeat it. If a mistake happens, for example if it says something to someone else, users will expect a report about what got told to other users in the house. The voice assistant should have some responsibility.

Privacy mistakes can happen, but then we need to know about them before it is too late.

Using Big Data

The big promise of big data is that it could heal the world using our behavior. A growing number of systems are built to cope with the abundance of information; whether they actually cope is still a question. It seems like many of these companies are in the business of collecting for the sake of selling: they don’t really know what to do with the data, they just want to have it in case someone else might. Therefore I am not convinced that the voice assistant needs all the information being collected.

What if it saved just one day of your data, or a week? Would that be contextual enough?

Last year I was fascinated by a device called Kapture. It records everything around you at any given moment, but only if you notice something important and tap it does it save the previous two minutes. Saving things retrospectively, capturing moments that are magical before you even realize they are, that’s incredible. You effortlessly collect data and curate it, while all the rest is gone. Leaving voice messages to yourself, writing notes, sending them to others, getting a summary of your notes, what you cared about, what interested you, when you save most: all of these scenarios could be the future. The problem it solved for me was: how can I capture something that is already gone whilst keeping my privacy intact?

Kapture
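
Mechanically, this is a ring buffer: record continuously into fixed-size memory that overwrites itself, and persist a copy only when the user taps. A minimal sketch follows; the chunk model and the `RetroBuffer` name are my own illustration, not Kapture’s actual implementation:

```python
import collections

class RetroBuffer:
    """Rolling capture in the spirit of Kapture: everything is
    continuously overwritten, and only a tap persists the recent past."""

    def __init__(self, seconds=120, chunks_per_second=1):
        # Fixed-size deque: old chunks fall off automatically,
        # so nothing is kept unless the user asks for it.
        self.buffer = collections.deque(maxlen=seconds * chunks_per_second)
        self.saved_clips = []

    def record(self, chunk):
        self.buffer.append(chunk)

    def tap(self):
        """The user noticed something worth keeping:
        freeze the last ~2 minutes."""
        self.saved_clips.append(list(self.buffer))

recorder = RetroBuffer(seconds=120)
for chunk in (f"audio-{i}" for i in range(600)):  # ten minutes of input
    recorder.record(chunk)
recorder.tap()  # only now do the previous two minutes survive
print(len(recorder.saved_clips[0]))  # 120 chunks; everything older is gone
```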

Social privacy

People are obsessed with looking at their information, the same way they are obsessed with looking in the mirror. It’s addictive, especially when it comes as a positive experience.

In a social context the rule of “the more you give, the more you get” works, but it suffers in software. Maybe at some point in the future this will change, but nowadays software just doesn’t have the variability and personalization required to actually make life better for people who are more “online”. Overall, the experience is more or less the same whether you have 10 friends on Facebook or 1,000; to be honest, it’s probably worse with 1,000. The same applies to Twitter and Instagram. Imagine what Selena Gomez’s Instagram looks like. Do you think someone at Instagram thought of that scenario, or gave her more tools to deal with it? Nope. It seems like companies talk about it but rarely act on it, and it definitely applies to voice data collection.

It seems clear that the amount users reveal doesn’t justify the result they get in return. One of the worst user experiences, for example, is signing into an app with Facebook: the user is led to a screen that requests access to everything…and in return they are promised they can write down notes with their voice. Does that have anything to do with their address or their online friends? No. Information is too cheap nowadays, and users have gotten used to pressing “agree” without reading. I hope we can standardize value-for-data while breaking information down in the right way.

Why do we have to be listened to and documented every day if we can’t use the result? Permissions should be flexible, and we should incorporate a way to make the voice assistant stop listening when we don’t want it to. Leaving a room makes sense when we don’t want another person to hear us, but what will that look like in a scenario where the voice assistant is always with us? Should we tell it to “stop listening for five minutes”?

The terminology of artificial intelligence relates to a brain, but maybe we should consider its usage, or its creation, as more related to a heart. Artificial Emotional Intelligence (A.E.I.) could help us think of the assistant differently.

Use or be used?

How does it improve our lives, and what is the price we need to pay for it? In “Things I would like to do with my Voice Assistant” I talked about how useful some capabilities would be compared to how much data each would need to become a reality.

So how far is the voice assistant from reading emotions, having tact, and syncing with everything? Can all this happen with privacy issues in mind? Does your assistant snitch on you, or tell you when someone was sniffing around and asking weird questions? It’s not enough to choose methods like differential privacy to protect users. Companies should really consider the value of loyalty, and of creating a stronger bond between the machine and the human rather than between the machine and the company that created it.

Further into the future, we could get to scenarios like these:

There could also be some sort of behavioral-understanding mechanism that mimics a new person who has just met you in a pub. If you behave in a specific way, that person will probably know how to react supportively even though they didn’t know you before. In the same way, a computer that knows these kinds of behaviors can react to you, even more so if there are sensors that tell it your physical status and recognize facial patterns and tone of voice.

Another good example is doctors, who can often diagnose a patient’s disease without looking at their full health history. Of course it’s easier to look at everything, but they usually do that only when they need to figure out something that isn’t simple. When things are simple, it should be faster, and in technology’s case, more private.

Summary

There are many ways to make voice assistants more private whilst helping people trust them, yet it seems no company has adopted this strategy. It might require a company that doesn’t rely on an advertising-driven business model: a company that creates something released into the wild, a machine that becomes a friend with a double duty to the company and the user, but one that is at least truthful and open about what it shares.

The UX Poet

 

Not too long ago I had an experience that made me look differently at the way I use words. We were holding a workshop with colleagues from Korea and the USA. Everything we’d planned went well and the responses were good. Then we took them out for dinner and drinks. A number of beers later, the lead UX designer on the American team disclosed that he thinks I use altitudinous words in my presentations. He mentioned that at times they were dazed by the vocabulary. The others agreed and said they had to go to the dictionary to figure out the exact meaning of a word. We laughed about it. They said I do UX poetry.

It was a good time to start explaining to my colleagues: “I have a confession: in my past, I used to be a rapper.” Everyone’s like “wow”…to counter the trillion preconceptions that just bounced into their heads I ask, “do you know Aesop Rock?…no…how about Sage Francis, Buck 65…maybe Talib Kweli / Mos Def?” and usually I get one “yes”. That was the kind of hip hop I tried to do.

from the amazing project: poly-graph.co/vocabulary.html

The point is: I’m in love with language, structure, words, rhymes and their meanings. When I was a child I spent hours going through rhyme books, dictionaries and philosophers. I love reading Zizek and going through the same sentence eight times to maybe get it, or listening to Ghostface Killah and working my way through the slang. I love reading poetry, and I also love wowing people with pompous words.

These rappers are less popular. Who reads philosophy nowadays? What does it have to do with UX?

Defining things in UX is crucial. Since the UX discussion is focused on users’ emotions, it is imperative that we describe them to the best of our ability. Vocabulary shouldn’t be compromised in presentations. Better communication means saying exactly what you mean and then, if needed, supporting it with simpler words.

There must be parity between an eloquent text and the speaker’s elocution.

Importance of words

It’s never simple to simplify and to find the essence of a “thing”. It’s harder still to constrain an emotion into a sentence. When dealing with UX we analyze human behavior and try to use pre-made experimental assumptions and methods to observe and extract meaningful insights. Being able to analyze behavior requires patience and the ability to simply facilitate and empathize. Documenting it entails removing preconceptions, ego and judgmental obstructions. Analyzing it makes assumptions rise again, for example through natural comparisons between people.

Everything ends up in text or visual format

The process involves dozens of tasks, each ending in a written output. The way these are written and presented dictates how seriously the output will be treated and how it will be absorbed by the rest of the team. When it’s all done, written and shared, the creator has to live with it for a while; it turns into a condition of creation. These words become the mainstay upon which the design ship is built. The definition of the user and of the problems will find its way to people who are less directly involved. It will be a seed that stays in their heads and grows through the business, unrestrained and unsupervised.

Users are not always right, and therefore testing is just the beginning of the process. On top of that research, a solution is developed. Every report needs to end with “next steps”. Paragraphs that describe potential solutions should inspire with a vision, maintain simplicity and reveal the road to the target.

Controlling the means of expression means better control of the process.

When reading poetry, people’s feelings diverge. I try to create a scenario where everyone has the luxury of thinking differently, but eventually it feels the same.

Importance of keywords

Behind every simplicity, there is hidden complexity and the same applies to keywords. Each of them is the key to a passage of information. Eventually, they all end up in the same space, interacting with each other and creating the experience.

Keywords are pillars for the listeners’ memory, and if the storytelling and weaving are done well, strong connections can form around your designs. Design is not only making, per se; it is also communicating. We communicate it to the people we present to, to people who will read it later, and to people who don’t have time and will just skim the pages. When we communicate, there are infinite cases to cater to and think of.

Be a diplomat when you co-work. Be a poet, striving and ferocious, when reaching the conclusion.

UX poetry is your chance to make a difference in a more personal way. Don’t let things get diffused by misunderstandings; write and design the future by any means of expression, and make it eloquent.

Proving your Design

Over the years I have had more experience working with developers than with designers. However, in the past two years I have been more involved with managing and creating design. One of my key goals was to structure processes that would allow me to prove my designs.

Design is not an exact science, but it still has rules. That means that there is a way of creating designs, but no definitive way of knowing in advance whether they will be right for your purposes. I believe that there are tools and processes we can use to increase the probability of designs being fit for purpose and, just as importantly, of convincing others that your design is the right one.

Tools for proving design

Trends

Trends give an overarching view of where people, industry, designers and technology are heading. Trend research usually collates the past two years of an area. All trends start as a seed of inspiration. What you’re doing with your research is tracking the development of that seed to see if it grows into a trend. From the data you gather, you can create a trends report that groups the data in meaningful ways. As well as helping you see where your design fits in current trends, it can also be used to remind stakeholders of things they’ve seen, while reassuring them that you’re considering the wider market and not designing in a vacuum.

Measurements and evaluation

Being patient and focused are rare traits in designers. There is always this drive to change and inspire, to revolutionize, to make something interesting again. However, it is extremely important to harness that creativity for critical and incremental development too. Reflecting on your design, testing it and measuring it are essential for proving its value to others. To prove a new concept you should measure the previous one or, if it’s completely new, measure it against other similar concepts.

You can measure design by conducting user testing, focus groups or even guerrilla testing internally. If the measures against which you test are agreed upon and respected by the stakeholders, they give your design substantial support.

Experts

To gain credibility for your design, you can’t settle for internet-based research alone; product reviews of an app, for example, carry very limited information. Talking to well-known experts allows you to learn from the people who founded the industry, and their names can lend credibility to your design, especially if your stakeholders have heard of them. Moreover, these experts or advisors cycle through many companies and often have a good sense of what is happening overall. Experts shouldn’t just be design experts; they could be experts in technology, strategy, marketing, or any field relevant to your product.

Benchmarking

Benchmarking is an activity we do naturally all the time. We always compare our product to others and sometimes the grass looks greener on the other side. From my experience it looks greener when we don’t thoroughly understand the strategy or refuse to acknowledge the strengths and weaknesses of our workplace.

Keep a catalog of things that interest you and try to cluster and compare them to see improvements and direction. Be mindful of the limitations this imposes on your mind; not everything should be about catching up with competitors. The fact that the market hasn’t done something already doesn’t mean you’ve identified a golden opportunity. It just means you ought to find the reason it hasn’t been done and then see whether it matches your company’s strategy.

Strategy

Whether you create the strategy or rely on one, it is always important to understand it and interpret it in a way that shows the links between it and your design. Strategy usually relies on knowing the current situation, the goal and how to get there. A change log is very valuable for this purpose: track your competitors, try to understand their strategy, and then use that to your advantage.

History/Company DNA

Looking at the history of your company is extremely important: know the past to learn for the future. Somewhere there might be a database of useful information about success stories and failures. The faster you understand how the company gained its success, the more quickly you’ll understand whether your direction is aligned with its own. Be mindful of politics; you might present something that has already been tried and rejected by stakeholders.

Co-Design

Designing together helps gain support on the ground and puts the design suggestion under multiple lenses. It is also essential to help you learn more about the company. The more communication and the more knowledge that goes into a design, the better it will be.

Summary

Using these tools alone is not enough to convince stakeholders, though; you’ve got to tell a story. Using these methods can be tricky: you might realize you’ve got the problem right but not the solution, or there might be contradictions between the results you get when testing. When the findings are woven into a compelling story, you allow your client to focus on the narrative.

A good designer breaks the product and its context into bits, makes sense of them, looks at them through a different lens and then reconstructs the product to make it better.

Communicating that process to stakeholders is important when proving a design. It gives them the why behind the what and often that understanding is what you need to gain support.

Thank you to everyone who helped and advised me on this post: Carlos Wydler, Oded Ben Yehuda.