Top 9 Talks You Should Go See at Voice Summit 2019
July 10, 2019
With over 280 sessions to choose from between July 22–25, we decided to create a top 9 list of sessions you should check out if you’re attending Voice Summit 2019, the largest voice event in the world.
Tuesday, July 23 Sessions
Why Conversational Interfaces Are Designed to Fail
Where: CKB 303 — 1:00 PM to 1:45 PM
Who: Mark Webster, Director of Product, Voice UI/UX, Adobe
Summary: The majority of voice interfaces are experienced as assistants whose goal is to appear as “human” as possible. Siri, Alexa, and Cortana all have names, genders, and the same inflections and intonations as real people. They even tell us jokes. As creators, we’re approaching voice experiences as conversational, and we talk of voice being a “natural” medium, liberated from the constraints and conventions inherent in other interfaces. This is the wrong approach, destined for failure and holding back true ubiquity for voice-first user experiences.
In this session, Mark Webster will discuss how we’ve been ignoring the lessons user experience design has taught us each time a new form of digital interaction has been introduced, why voice is no different, and what we can all do to overcome the hurdles surrounding voice and design, outlining potential solutions to challenges in the market and what they mean for the future of voice.
Deceptively Simple – Designing a Voice Experience
Where: CTR 240 — 2:00 PM to 2:45 PM
Who: Paul Jackson, Senior UX Designer, BBC Voice + AI
Summary: How do you design a voice experience for an audience that is still learning to speak? That was the unique challenge faced by the Voice + AI team at the BBC.
Over the past year, I’ve led the design of a voice experience that aims to be immersive and engaging, yet simple enough for a three-year-old to use. The final result – the BBC Kids Skill – hides a wealth of complex design decisions behind a child-friendly exterior. I’ll share how the BBC approached this unique challenge, why we chose to focus on this audience, and what we learned about designing for voice in the process.
I’ll explore the unique considerations that designing a voice experience for children demands and demonstrate the solutions that we’ve employed to provide a usable experience. Participants will learn about the universal voice design principles that underpin the BBC Kids Skill. Through a series of illustrated examples, I’ll show that by understanding how to design a voice experience for children, you learn how to design a voice experience for anyone.
Refining UX research to create compelling, contextual voice interface experiences
Where: CTR Ballroom A — 4:00 PM to 4:45 PM
Who: Moderator – Lesley Palfreyman, Sr. User Experience Designer & Researcher, Amazon Music, Panelists – Chris Geison, Principal UX Researcher, AnswerLab; Marli Mesibov, VP, Content Strategy, Mad*Pow; Mave Houston, Senior Director, User Research, Audible; Hilary Hayes, Senior Design Researcher, Connected; Kyree Holmes, UX Researcher, Comcast
Summary: Creating engaging voice interface experiences begins with understanding your audience. Learn what UX research is, how it’s used to inform strategy and design, and how to conduct it yourself or work effectively with professional UX researchers.
Editor’s Note: Our one and only Hilary Hayes will be speaking at this session!
Wednesday, July 24 Sessions
Now We’re Cooking With Gas!
Where: CTR Atrium — 9:00 AM
Who: Franklin Lobb, Sr Solutions Architect, Amazon Alexa
Summary: AWS Lambda is a core AWS service used by Skill Builders when creating their Alexa Skill experience. However, once you move beyond the basic use cases, there are many other AWS services that can improve and extend the functionality, performance, security, and automation of your skills. Come learn which services apply to which problems you’re trying to solve, and you too will be cooking with gas!
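For context on the "basic use case" this session builds beyond: an Alexa skill's backend is typically a Lambda function that receives a JSON request and returns a JSON response. Below is a minimal sketch of such a handler, written without the ask-sdk so the request/response shape is visible; the `CookingTipIntent` name and the speech text are hypothetical examples, not from the session.

```python
# Minimal AWS Lambda handler for an Alexa custom skill (sketch).
# Intent names and speech text are illustrative, not from the talk.

def build_response(speech_text, end_session=True):
    """Wrap plain speech text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point Lambda invokes once per Alexa request."""
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        # User opened the skill without asking for anything specific.
        return build_response("Welcome! Ask me for a cooking tip.",
                              end_session=False)

    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "CookingTipIntent":  # hypothetical intent
            return build_response("Preheat the pan before adding oil.")

    # Fall through for SessionEndedRequest and anything unexpected.
    return build_response("Goodbye!")
```

Once a skill outgrows this pattern — persisting state, protecting secrets, automating deployments — that's where the other AWS services the session covers come in.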
Three things we learned building the Hey Mercedes Voice Assistant
Where: CKB Strategy Lab — 1:00 PM to 1:45 PM
Who: Robert Bruchhardt, Senior Software Engineer, Mercedes-Benz Research and Development North America
Summary: We are building the Hey Mercedes in-car voice assistant out of Silicon Valley. In this session, we want to share three of the things we learned while doing so. The focus will be on automotive-specific voice assistant development, but the issues discussed can easily be applied to other areas and applications as well. Join us to discuss voice applications in the mobility and automotive space, or simply to discover the similarities and differences with other voice assistant applications.
The Voice Playbook – Your foolproof guide to building best-in-class voice products
Where: CKB 217 — 4:00 PM to 4:45 PM
Who: Polina Cherkashyna, Product Manager, Connected; Hilary Hayes, Senior Design Researcher, Connected
Summary: If you are working for an established brand and looking for a successful entry into the voice-first era, this talk is for you. It will provide you with a framework to help you determine whether voice is a necessary medium for your business and identify concrete opportunities to pursue, along with voice-specific techniques and from-the-trenches examples of how to de-risk voice product development and bring a successful product to market. Each step will be illustrated with a real-life case study and data. At the end of the workshop, participants will receive the Voice Playbook to take back with them.
Editor’s Note: Meet our Connectors Polina and Hilary at this exciting session!
Thursday, July 25 Sessions
What If You Don’t Have a Voice 2.0?
Where: CKB 204 — 1:00 PM to 1:45 PM
Who: Thibault Duchemin, CEO, Ava; David Heafitz, Vice President, Prudential Financial; Thomas Chappell, Sr Assoc, Infra Sysdev, Prudential; Diane Hettinger, Director Health and Wellness, Prudential Financial
Summary: Planning to create a voice application that needs to support both Alexa and Google Actions? What are the similarities and differences in their stacks, security models, and hosting architecture? What are the most important technical considerations before designing a voice-first, cross-platform experience? Learn from Microsoft alums and AWS Certified Skill Builders how to overcome the challenges in a multi-platform environment – from design to implementation. Take home tips and tricks you can use for your next project.
Addressing Bias in Artificial Intelligence & Voice Assistants
Where: WEC Main — 1:00 PM to 1:45 PM
Who: Darlene Gillard, Director of Community, Founding Team, digitalundivided; David Yakobovitch, Enterprise Data Scientist, Host HumAIn Podcast, Galvanize; Ed Doran Ph.D., Director of Product Management, Microsoft Research; Jen Heape, CCO & Co-Founder, Vixen Labs; Xango Eyee, Master Technical Architect, Emerging Technologies, Google
Summary: How can we ensure that increasingly ubiquitous voice assistants, skills and apps are designed for everyone and that harmful bias isn’t inadvertently institutionalized and spread through AI/machine learning?
The Future of Human Computer Interaction & AI Generated Characters
Where: CKB 204 — 2:00 PM to 2:45 PM
Who: Justin Hendrix, Executive Director, NYC Media Lab
Summary: In the near future, characters and entities will emerge that are animated by artificial intelligence. Interactions with these AI characters will produce highly personalized experiences. In this panel, we’ll consider the state of the art for developing AI interactions, the role of natural language processing and generation, and techniques for developing voice interactions with a focus on what is now possible, and what gaps exist for developers, researchers and entrepreneurs who seek to build solutions.
Will you be at Voice Summit? Be sure to look out for our Connectors and tweet us which sessions you’ll be attending!