Video Calling for Autism explores how we might employ cutting-edge innovations in computer vision and AI to help make video calling more comfortable for autistic users. Autistic people often face challenges in conveying and reading facial expressions during conversations. In video calls, they may have difficulty knowing whether their facial expressions match the sentiment they wish to convey; when a mismatch occurs, conversations can go badly, potentially misleading or angering their conversation partners. We use technologies from MSR AI to analyze an autistic person’s facial expressions (as seen by the video camera), classify them into basic emotions, and render the results in real time as emojis on their screen, as sketched below. Giving users live feedback during a video call about how their own facial expressions might be interpreted can help them ensure they are expressing their intended meaning and improve conversations in Teams, Skype, or other video-calling systems.
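
The sketch below illustrates the feedback loop described above: capture webcam frames, detect a face, classify its expression into a basic emotion, and overlay the result on the user’s self-view. It is a minimal illustration under stated assumptions, not the project’s implementation: the `classify_emotion` function and the emotion-to-emoji mapping are hypothetical placeholders for the MSR AI emotion-recognition models, and face detection here uses OpenCV’s bundled Haar cascade.

```python
# Minimal sketch of the live expression-feedback loop.
# NOTE: classify_emotion() is a hypothetical stand-in for the MSR AI
# emotion-recognition model; this is not the project's actual implementation.
import cv2

# Illustrative mapping from basic emotions to the emoji shown to the user.
EMOJI = {
    "happiness": "😊",
    "sadness": "😢",
    "anger": "😠",
    "surprise": "😮",
    "neutral": "😐",
}

# OpenCV's bundled frontal-face Haar cascade, used here as a lightweight detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def classify_emotion(face_bgr) -> str:
    """Placeholder for a real emotion classifier (e.g., a trained CNN or cloud API).

    A real implementation would return one of the basic-emotion labels in EMOJI.
    """
    return "neutral"


def run_feedback_loop():
    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                emotion = classify_emotion(frame[y : y + h, x : x + w])
                # OpenCV's putText cannot draw emoji glyphs, so this sketch
                # overlays the emotion label; a real UI would render
                # EMOJI[emotion] next to the user's self-view.
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.putText(frame, emotion, (x, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            cv2.imshow("Expression feedback", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    run_feedback_loop()
```

In a production system, classification would run on the same video stream the calling client already captures, and the emoji overlay would be drawn by the client UI rather than by OpenCV; the loop above only conveys the overall capture–classify–render structure.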