ACTIVE LISTENING: THE SAY WHAT app
How might technology allow people to listen to each other during a conversation or during an argument?
The Say What app is a gauge: a visual representation of conversational flow, making visible what would otherwise be lost in thin air. When participants speak about the same subject in a constructive manner, their shapes merge and begin to form a luscious, hybrid shape. In the heat of an argument, by contrast, the shapes representing each participant disintegrate or turn sharp and violent (shown above). The size and complexity of the patterns are a direct result of each person's ratio of talking versus listening, their anger level, and their use of swear words.
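The logic above could be sketched in code. This is a purely illustrative model, not the app's actual implementation: the function names, weightings, and swear-word list are all assumptions made for the sake of the sketch.

```python
# Hypothetical sketch of how a conversation turn might be scored.
# All names, weights, and word lists are illustrative assumptions.

SWEAR_WORDS = {"damn", "hell"}  # placeholder list, not the app's real database

def harmony_score(talk_seconds, listen_seconds, anger_level, words):
    """Return a 0.0-1.0 score; 1.0 means a balanced, calm exchange."""
    total = talk_seconds + listen_seconds
    balance = 1.0 - abs(talk_seconds - listen_seconds) / total if total else 0.0
    swears = sum(1 for w in words if w.lower() in SWEAR_WORDS)
    swear_penalty = min(swears * 0.2, 1.0)
    calm = 1.0 - max(0.0, min(anger_level, 1.0))
    # Arbitrary illustrative weighting of the three signals.
    score = 0.5 * balance + 0.3 * calm + 0.2 * (1.0 - swear_penalty)
    return round(score, 2)

def shape_style(score):
    """Map the score to a visual treatment: merged and lush vs sharp and fragmented."""
    if score >= 0.7:
        return "merge"     # shapes blend into one hybrid form
    if score >= 0.4:
        return "separate"  # shapes stay distinct but intact
    return "shatter"       # sharp, disintegrating edges
```

An evenly balanced, calm exchange would score near 1.0 and trigger the "merge" treatment, while a one-sided, angry, swear-laden turn would fall toward "shatter".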
Robust developments in the field of speech recognition software are the backbone of this app. Devices like the Amazon Echo fall into the category known as the Conversational User Interface (CUI). Rather than human-to-AI dialogue, the Say What app focuses on person-to-person dialogue, visualizing the flow of a conversation in real time to promote active listening.
By monitoring individual parts of speech (nouns, adjectives, verbs) and the overall tone or intensity, the app creates a visual representation of the conversation at hand. Each person begins by choosing a square, triangle, or circle to represent them.
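The part-of-speech monitoring could look something like the following minimal sketch. It assumes transcribed text is already available from a speech recognition engine, and the tiny hand-written lexicon is a stand-in for a real part-of-speech tagger.

```python
# Minimal sketch of per-speaker speech monitoring. The tiny lexicon below is
# an illustrative stand-in for real part-of-speech tagging, not the app's logic.

TINY_POS = {
    "dog": "noun", "idea": "noun", "run": "verb", "argue": "verb",
    "loud": "adj", "calm": "adj",
}

def tag_counts(transcript):
    """Count nouns, adjectives, and verbs in a transcript string."""
    counts = {"noun": 0, "adj": 0, "verb": 0, "other": 0}
    for word in transcript.lower().split():
        counts[TINY_POS.get(word, "other")] += 1
    return counts

def pattern_complexity(counts):
    """A higher share of tagged content words -> a larger, more intricate shape."""
    tagged = counts["noun"] + counts["adj"] + counts["verb"]
    total = tagged + counts["other"]
    return round(tagged / total, 2) if total else 0.0
```

In a real build, the lexicon lookup would be replaced by a proper tagger, and the complexity value would drive the size and detail of each participant's shape.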
The spoken word is completely invisible, yet its effect on us can be lasting.
The automatic speech recognition (ASR) algorithms cross-reference a dictionary, a thesaurus, an out-of-vocabulary database, and voice-to-text programs in order to provide the most accurate portrayal of the conversation. The form and function of the Say What app were developed by iterating on multiple wireframes, service diagrams, and a 2x2 competitor matrix.
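The cross-referencing step could be sketched as a simple vocabulary check, with unrecognized words routed to an out-of-vocabulary list for further handling. The word set here is a placeholder, not the app's actual databases.

```python
# Hedged sketch of the cross-referencing step: each recognized word is checked
# against a known vocabulary; misses go to an out-of-vocabulary (OOV) list.
# The vocabulary is a placeholder, not a real dictionary database.

DICTIONARY = {"listen", "speak", "argue", "agree"}

def cross_reference(recognized_words):
    """Split ASR output into in-vocabulary words and OOV candidates."""
    known, oov = [], []
    for word in recognized_words:
        (known if word.lower() in DICTIONARY else oov).append(word)
    return known, oov
```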
When this project was made, nothing on the market gauged the quality of people's conversations. I hope that someday soon this speculative project becomes a reality and can act as a training tool for people to become great listeners.