{"id":629775,"date":"2020-02-16T21:51:10","date_gmt":"2020-01-07T13:50:59","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=629775"},"modified":"2020-11-15T20:30:28","modified_gmt":"2020-11-16T04:30:28","slug":"workshop-on-speech-technologies-for-code-switching-2020","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/workshop-on-speech-technologies-for-code-switching-2020\/","title":{"rendered":"First Workshop on Speech Technologies for Code-switching in Multilingual Communities 2020"},"content":{"rendered":"

Organizers:<\/strong><\/p>\n

Kalika Bali, Microsoft Research India<\/p>\n

Alan W Black, Carnegie Mellon University<\/p>\n

Rupesh Kumar Mehta, Microsoft<\/p>\n

Thamar Solorio, University of Houston<\/p>\n

Victor Soto, Columbia University<\/p>\n

Sunayana Sitaram, Microsoft Research India<\/p>\n

Emre Yilmaz, National University of Singapore<\/p>\n

Shared Task Committee:<\/strong><\/p>\n

Sanket Shah, Microsoft Research India<\/p>\n

Sandeepkumar Satpal, Microsoft<\/p>\n

Vinay Krishna, Microsoft<\/p>\n

Technical Support:<\/strong><\/p>\n

Rashmi K Y, Microsoft<\/p>\n

Naveen Kumar, Microsoft<\/p>\n

Vyshak Jain, Microsoft<\/p>\n

Nadeem Shaheer, Microsoft<\/p>\n","protected":false},"excerpt":{"rendered":"

Code-switching is the use of multiple languages in the same utterance and is common in multilingual communities across the world. Code-switching poses many challenges to speech and NLP systems and has gained widespread interest in academia and industry recently. We organized special sessions on code-switching at Interspeech 2017, 2018 and 2019. In 2020, we will be organizing this as an online-only workshop co-located (virtually) with Interspeech 2020.<\/p>\n","protected":false},"featured_media":627609,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2020-10-30","msr_enddate":"2020-10-31","msr_location":"Virtual\/Online","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"http:\/\/www.aka.ms\/CSworkshop2020","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":true,"msr_private_event":false,"footnotes":""},"research-area":[13545],"msr-region":[197903],"msr-event-type":[210063],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-629775","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-human-language-technologies","msr-region-asia-pacific","msr-event-type-workshop","msr-locale-en_us"],"msr_about":"Organizers:<\/strong>\r\n\r\nKalika Bali, Microsoft Research India\r\n\r\nAlan W Black, Carnegie Mellon University\r\n\r\nRupesh Kumar Mehta, Microsoft\r\n\r\nThamar Solorio, University of Houston\r\n\r\nVictor Soto, Columbia University\r\n\r\nSunayana Sitaram, Microsoft Research India\r\n\r\nEmre Yilmaz, National University of Singapore\r\n\r\nShared Task Committee:<\/strong>\r\n\r\nSanket Shah, Microsoft Research India\r\n\r\nSandeepkumar Satpal, Microsoft\r\n\r\nVinay Krishna, Microsoft\r\n\r\nTechnical Support:<\/strong>\r\n\r\nRashmi K Y, Microsoft\r\n\r\nNaveen Kumar, Microsoft\r\n\r\nVyshak Jain, Microsoft\r\n\r\nNadeem Shaheer, Microsoft","tab-content":[{"id":0,"name":"Summary","content":"Code-switching is the use of multiple languages in the same utterance and is common in multilingual communities across the world. Code-switching poses many challenges to speech and NLP systems and has gained widespread interest in academia and industry recently. We organized special sessions on code-switching at Interspeech 2017, 2018 and 2019. In 2020, we will be organizing this as a virtual workshop immediately after Interspeech 2020.\r\n\r\nWe welcome papers related to, but not restricted to the following aspects of code-switching:\r\n

    \r\n \t
  1. Code-switched speech recognition and synthesis<\/li>\r\n \t
  2. Language Modeling for code-switching<\/li>\r\n \t
  3. Multilingual models for code-switching<\/li>\r\n \t
  4. Data and resources for code-switching<\/li>\r\n \t
  5. Code-switched chatbots and dialogue systems<\/li>\r\n \t
  6. Code-switched speech analytics<\/li>\r\n<\/ol>\r\n*NEW* You can find the proceedings of the workshop here<\/a> and view all pre-recorded talks in the Schedule tab.<\/strong><\/span>\r\n\r\n*NEW* The workshop will be conducted on Microsoft Teams. All registered participants have been sent information about this by email.<\/span><\/strong>\r\n\r\nWorkshop timeline:<\/strong>\r\n\r\nShared task testing period: April 27-29 2020<\/del>\r\n\r\nFirst Paper submission deadline: June 5 2020<\/span><\/del>\r\n\r\nPaper acceptance notification: July 20 2020<\/del>\r\n\r\n1 page Abstract submission deadline for special track: Aug 9 \u00a02020<\/del>\r\n\r\nAbstract and paper acceptance notification (special track and second round): Sept 7 2020<\/del>\r\n\r\nCamera ready papers due (*Both Rounds*): Sept 20th 2020<\/del>\r\n\r\nVideo submission deadline for accepted papers: 9th October 2020<\/del>\r\n\r\nRegistration deadline: 15th October 2020<\/del>\r\n\r\nWorkshop: 30 and 31 October 2020\r\n\r\nContact us:<\/strong>\r\n\r\nPlease write to sunayana.sitaram@microsoft.com<\/a>"},{"id":1,"name":"Schedule","content":"Please note: This is a tentative schedule and is subject to change. All times are in China Standard Time \u200e(UTC+8)\u200e.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
    Day 1<\/strong><\/span><\/td>\r\n<\/td>\r\n\u00a0Friday, 30 October 2020<\/strong><\/span><\/td>\r\n<\/tr>\r\n
    Time (CST)<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSession Chairs<\/strong><\/td>\r\nTitle<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\n<\/tr>\r\n
    20:30-21:30<\/td>\r\nOpening remarks and Keynote<\/td>\r\n<\/td>\r\nPoints of connection between linguistics and speech technology with regard to code-switching<\/a><\/td>\r\nBarbara Bullock, Jacqueline Toribio<\/td>\r\n<\/tr>\r\n
    21:30-21:40<\/td>\r\nBreak<\/td>\r\n<\/td>\r\n<\/tr>\r\n
    21:40-21:55<\/td>\r\nPaperS1<\/td>\r\nThamar Solorio and Manuel Mager<\/td>\r\nA Study of Types and Characteristics of Code-Switching in Mandarin-English Speech<\/a><\/td>\r\nLeijing Hou<\/td>\r\n<\/tr>\r\n
    21:55-22:10<\/td>\r\nPaperS1<\/td>\r\nThamar Solorio and Manuel Mager<\/td>\r\nMalayalam-English Code-Switched: Speech Corpus Development and Analysis<\/a><\/td>\r\nSreeram Manghat<\/td>\r\n<\/tr>\r\n
    22:10-22:25<\/td>\r\nPaperS1<\/td>\r\nThamar Solorio and Manuel Mager<\/td>\r\nUnderstanding forced alignment errors in Hindi-English code-mixed speech -- a feature analysis<\/a><\/td>\r\nAyushi Pandey<\/td>\r\n<\/tr>\r\n
    22:25-22:40<\/td>\r\nPaperS1Q&A<\/td>\r\nThamar Solorio and Manuel Mager<\/td>\r\nQ&A<\/td>\r\n<\/tr>\r\n
    22:40-22:50<\/td>\r\nBreak<\/td>\r\n<\/td>\r\n<\/tr>\r\n
    22:50-23:00<\/td>\r\nSponsorTalk<\/td>\r\nMicrosoft<\/td>\r\n<\/td>\r\nBasil Abraham<\/td>\r\n<\/tr>\r\n
    23:00-23:15<\/td>\r\nPaperS2<\/td>\r\nKalika Bali and Khyathi Chandu<\/td>\r\nMere account mein kitna balance hai? - On building voice enabled Banking Services for Multilingual Communities<\/a><\/td>\r\nAkshat Gupta<\/td>\r\n<\/tr>\r\n
    23:15-23:30<\/td>\r\nPaperS2<\/td>\r\nKalika Bali and Khyathi Chandu<\/td>\r\nInvestigating Modelling Techniques for Natural Language Inference on Code-Switched Dialogues in Bollywood Movies<\/a><\/td>\r\nAnjana Umapathy<\/td>\r\n<\/tr>\r\n
    23:30-23:40<\/td>\r\nPaperS2Q&A<\/td>\r\nKalika Bali and Khyathi Chandu<\/td>\r\nQ&A<\/td>\r\n<\/tr>\r\n
    Day 2:\u00a0<\/strong><\/span><\/td>\r\n<\/td>\r\nSaturday, 31 October 2020<\/strong><\/span><\/td>\r\n<\/tr>\r\n
    Time (CST)<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSession Chairs<\/strong><\/td>\r\nTitle<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\n<\/tr>\r\n
    20:30-20:45<\/td>\r\nSharedTask<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nOpening Remarks and description of shared task<\/a><\/td>\r\nSunayana Sitaram<\/td>\r\n<\/tr>\r\n
    20:45-20:55<\/td>\r\nSharedTask<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nVocapia-LIMSI System for 2020 Shared Task on Code-switched Spoken Language Identification<\/a><\/td>\r\nClaude Barras<\/td>\r\n<\/tr>\r\n
    20:55-21:05<\/td>\r\nSharedTask<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nExploiting Spectral Augmentation for Code-Switched Spoken Language Identification<\/a><\/td>\r\nPradeep R<\/td>\r\n<\/tr>\r\n
    21:05-21:15<\/td>\r\nSharedTask<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nOn detecting code mixing in speech using Discrete latent representations<\/a><\/td>\r\nSai Krishna Rallabandi<\/td>\r\n<\/tr>\r\n
    21:15-21:25<\/td>\r\nSharedTask<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nLanguage Identification for Code-Mixed Indian Languages In The Wild<\/a><\/td>\r\nParav Nagarsheth<\/td>\r\n<\/tr>\r\n
    21:25-21:35<\/td>\r\nSharedTask<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nUtterance-level Code-Switching Identification using Transformer Network<\/a><\/td>\r\nKrishna DN<\/td>\r\n<\/tr>\r\n
    21:35-21:45<\/td>\r\nSharedTaskQ&A<\/td>\r\nSunayana Sitaram and Gustavo Aguilar<\/td>\r\nQ&A<\/td>\r\n<\/tr>\r\n
    21:45-22:00<\/td>\r\nBreak<\/td>\r\n<\/td>\r\n<\/tr>\r\n
    22:00-22:10<\/td>\r\nSponsorTalk<\/td>\r\nSpeechOcean<\/td>\r\n<\/td>\r\nYufeng Hao<\/td>\r\n<\/tr>\r\n
    22:10-22:25<\/td>\r\nPaperS3<\/td>\r\nGenta Indra Winata and Sai Krishna Rallabandi<\/td>\r\nLearning not to Discriminate: Task Agnostic Learning for Improving Monolingual and Code-switched Speech Recognition<\/a><\/td>\r\nSanket Shah<\/td>\r\n<\/tr>\r\n
    22:25-22:40<\/td>\r\nPaperS3<\/td>\r\nGenta Indra Winata and Sai Krishna Rallabandi<\/td>\r\nMultilingual Bottleneck Features for Improving ASR Performance of Code-Switched Speech in Under-Resourced Languages<\/a><\/td>\r\nTrideba Padhi<\/td>\r\n<\/tr>\r\n
    22:40-22:55<\/td>\r\nPaperS3<\/td>\r\nGenta Indra Winata and Sai Krishna Rallabandi<\/td>\r\nThe ASRU 2019 Mandarin-English Code-Switching Speech Recognition Challenge: Open Datasets, Tracks, Methods and Results<\/a><\/td>\r\nXian Shi<\/td>\r\n<\/tr>\r\n
    22:55-23:10<\/td>\r\nPaperS3Q&A<\/td>\r\nGenta Indra Winata and Sai Krishna Rallabandi<\/td>\r\nQ&A<\/td>\r\n<\/tr>\r\n
    23:10-23:20<\/td>\r\n<\/td>\r\nClosing remarks<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"},{"id":2,"name":"Keynote","content":"Title: Points of connection between linguistics and speech technology with regard to code-switching<\/strong>\r\n\r\nThe study of multilingualism presents a unique challenge within the discipline of linguistics since, without exception, the major linguistic theories have been developed from a monolingual orientation. However, no language is completely insulated from all others; there is invariably some evidence of language contact in every grammar. In the speech of multilinguals, these effects can be significant. In this talk, we focus on the overt forms of language contact, as manifested by the phenomena of borrowing and code-switching. We will also touch on the covert form of contact, what we call convergence. Our aim is threefold: (i) to provide a comprehensive overview of the syntactic, lexical, phonetic, and pragmatic effects of borrowing, code-switching, and convergence; (ii) to examine the theories that attempt to account for linguistic patterns of code-switching and borrowing; and (iii) to highlight points of connection to speech technologies.\r\n\r\nBarbara E. Bullock<\/u> (Ph.D., Linguistics, University of Delaware 1991) is Professor of Linguistics in the Department of French & Italian at the University of Texas. She specializes in the effects of bilingualism and language contact on linguistic structure, particularly on the phonetic systems. Her research projects investigate sociophonetics, code-switching and borrowing, language variation and change, and computational approaches to multilingualism. With colleagues and students, she has begun to explore the power of corpus linguistics and NLP as effective tools in research on bilingual speech forms, working to quantify and visualize language mixing and its intermittency to enable cross-corpus comparisons and linguistic generalizations.\r\n\r\n \r\n\r\nAlmeida Jacqueline Toribio<\/u> (Ph.D., Linguistics, Cornell University 1993) is Professor of Linguistics in the Department of Spanish and Portuguese at the University of Texas. Her research in formal linguistics investigates patterns of morphological and syntactic variation across languages and dialects as well as structural patterns of language mixing in bilingual code-switching; her complementary work in sociolinguistics considers the ways in which variables such as ethnicity, race, gender, literacy, and national origin are encoded through linguistic features and language choices. Her investigations employ diverse methods, from experimental elicitation, to ethnographies of rural and urban communities, to computational analyses of literary texts and popular media."},{"id":3,"name":"Shared Task","content":"We will be organizing a shared task on Code-switched Spoken Language Identification (LID) <\/strong>in three language pairs: Gujarati-English, Telugu-English and Tamil-English. The shared task will consist of two subtasks:\r\n\r\nSubtask A<\/strong>: Utterance-level identification of monolingual vs. code-switched utterances\r\n\r\nSubtask B<\/strong>: Frame-level identification of language in a code-switched utterance.\r\n\r\nRegistration for the shared task has started. Please fill in the form available at this link<\/a>. Participants will get download links to the data via email once they register with their email address.\r\n\r\nMore details about the shared task and baselines can be found here<\/a>.\r\n\r\nShared task rules:<\/strong>\r\n
      \r\n \t
    1. To participate in the Shared Task, you must register and consent to the agreement at the \u201cRegister\u201d page and download the data. Participants may not share the data with any person or organization without Microsoft\u2019s prior written consent.<\/li>\r\n \t
    2. Participants are required to use only the data released for the shared task for models submitted during the testing period.\u00a0 If desired, they can report scores using additional external data in the paper they submit to the workshop.<\/li>\r\n \t
    3. Participants may choose to use the corresponding language\u2019s data to build each system or combine the data and use it cross-lingually.<\/li>\r\n \t
    4. Participants may build systems for any number of language pairs, even if all of those systems use the same data.<\/li>\r\n \t
    5. Only the audio for the blind test sets will be released. Participants are expected to run their systems on the blind test sets and submit the label files.<\/li>\r\n \t
    6. Participants may form teams, and a participant can be part of multiple teams. All team members' names should be clearly mentioned in the submission email. These team members are expected to be co-authors of the paper that each team will submit to the workshop.<\/li>\r\n \t
    7. Participants can submit up to three models per task (task A and B) per language pair per team during the testing period. This means that each team can submit a total of up to 18 models (3 models, 3 language pairs, 2 tasks). Any additional models submitted after the first three per language pair per task will not be considered for evaluation.<\/li>\r\n \t
    8. Participants may also use the monolingual Gujarati, Tamil and Telugu training and dev data previously released by us, available here<\/a>, to train their models. Participants should not use the test data available at this link.<\/li>\r\n \t
    9. The systems submitted are expected to beat the baseline system in terms of Accuracy and EER; however, innovative systems that come close to the baseline may be considered.<\/li>\r\n \t
    10. Participants must submit the following items for evaluation: (1) the results files; (2) the final LID models; and (3) the research paper so the shared task organizers can reproduce the results against the blind set.<\/li>\r\n<\/ol>\r\nSubmission format:<\/strong>\r\n\r\nAll participants who have registered before 21 April 2020 will be sent an email on their registered email id with links to download test data on 27th April 2020. Participants will have till 17:00 IST on 29th April 2020 to submit their results files for evaluation.\r\n\r\nParticipants need to send an email to CSWorkshop2020@microsoft.com with the subject \"TaskA\/TaskB language CS Workshop 2020 Shared Task Evaluation\". Here language will be TA, TE or GU (Tamil-English, Telugu-English or Gujarati-English). For example, the subject line will be \"TaskA TA CS Workshop 2020 Shared Task Evaluation\" or \"TaskB GU CS Workshop 2020 Shared Task Evaluation\". Please include names of all team members in the body of the email, as well as a team name.\r\n\r\nPlease follow exact guidelines for subject and file format as the evaluation will be done automatically. In case there is a problem with the format, you will receive an email and you can resubmit. A failed submission will not be counted in the allowed attempts.\r\n\r\nParticipants should attach up to 3 results files along with each email. Results for each language will have to be submitted separately in separate emails.\r\n\r\nParticipants should submit CSV files with the formats specified below\r\n\r\nTask A:<\/strong>\r\n\r\nFile name: TaskA-language-modelname.csv where language = TA\/TE\/GU and modelname is a name of your choice\r\n\r\nFile format:\r\n\r\nfilename1,0\/1\r\n\r\nfilename2,0\/1\r\n\r\nfilename3,0\/1\r\n\r\nwhere 0 represents code-switched and 1 represents Monolingual\r\n\r\nTask B:<\/strong>\r\n\r\nFile name: TaskB-language-modelname.csv where language = TA\/TE\/GU and modelname is a name of your choice\r\n\r\nFile format:\r\n\r\nfilename1, Space separated sequence of language tags for every 200ms\r\n\r\nfilename2, Space separated sequence of language tags for every 200ms\r\n\r\nfilename3, Space separated sequence of language tags for every 200ms\r\n\r\nwhere the language tags are E (English), G (Gujarati), T (Tamil) and T (Telugu) or S (Silence).\r\n\r\nPlease note: For audio that cannot be divided exactly into 200ms frames, for example, if the audio is 4.56 seconds long then only 4.40 seconds of the audio will be considered for testing and the last 160ms of the audio will be ignored\r\n\r\nContact: In case you have questions about the shared task, please contact us at sunayana.sitaram@microsoft.com<\/a>"},{"id":4,"name":"Leaderboard","content":"\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
      Task A<\/strong><\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/tr>\r\n
      Gujarati<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\nTelugu<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\nTamil<\/strong><\/td>\r\n<\/td>\r\n<\/td>\r\n<\/tr>\r\n
      Team Name<\/td>\r\nAccuracy<\/td>\r\nEER<\/td>\r\n<\/td>\r\nTeam Name<\/td>\r\nAccuracy<\/td>\r\nEER<\/td>\r\n<\/td>\r\nTeam Name<\/td>\r\nAccuracy<\/td>\r\nEER<\/td>\r\n<\/tr>\r\n
      VocapiaLIMSI<\/td>\r\n0.75<\/td>\r\n0.12<\/td>\r\n<\/td>\r\nVocapiaLIMSI<\/td>\r\n0.79<\/td>\r\n0.10<\/td>\r\n<\/td>\r\nVocapiaLIMSI<\/td>\r\n0.79<\/td>\r\n0.10<\/td>\r\n<\/tr>\r\n
      Swiggy<\/td>\r\n0.70<\/td>\r\n0.15<\/td>\r\n<\/td>\r\nSwiggy<\/td>\r\n0.79<\/td>\r\n0.10<\/td>\r\n<\/td>\r\nSwiggy<\/td>\r\n0.79<\/td>\r\n0.10<\/td>\r\n<\/tr>\r\n
      Ground Zero<\/td>\r\n0.55<\/td>\r\n0.22<\/td>\r\n<\/td>\r\nCMU<\/td>\r\n0.74<\/td>\r\n0.13<\/td>\r\n<\/td>\r\nCMU<\/td>\r\n0.73<\/td>\r\n0.13<\/td>\r\n<\/tr>\r\n
      CMU<\/td>\r\n0.50<\/td>\r\n0.25<\/td>\r\n<\/td>\r\nSizzle<\/td>\r\n0.71<\/td>\r\n0.14<\/td>\r\n<\/td>\r\nGround Zero<\/td>\r\n0.67<\/td>\r\n0.16<\/td>\r\n<\/tr>\r\n
      Sizzle<\/td>\r\n0.47<\/td>\r\n0.26<\/td>\r\n<\/td>\r\nGround Zero<\/td>\r\n0.67<\/td>\r\n0.16<\/td>\r\n<\/td>\r\nSizzle<\/td>\r\n0.55<\/td>\r\n0.22<\/td>\r\n<\/tr>\r\n
      <\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/td>\r\n<\/tr>\r\n
      \u00a0<\/strong>\r\n\r\nTask B<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n<\/td>\r\n<\/td>\r\n<\/tr>\r\n
      Gujarati<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\nTelugu<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\n\u00a0<\/strong><\/td>\r\nTamil<\/strong><\/td>\r\n<\/td>\r\n<\/td>\r\n<\/tr>\r\n
      Team Name<\/td>\r\nAccuracy<\/td>\r\nEER<\/td>\r\n<\/td>\r\nTeam Name<\/td>\r\nAccuracy<\/td>\r\nEER<\/td>\r\n<\/td>\r\nTeam Name<\/td>\r\nAccuracy<\/td>\r\nEER<\/td>\r\n<\/tr>\r\n
      VocapiaLIMSI<\/td>\r\n0.78<\/td>\r\n0.06<\/td>\r\n<\/td>\r\nVocapiaLIMSI<\/td>\r\n0.79<\/td>\r\n0.06<\/td>\r\n<\/td>\r\nVocapiaLIMSI<\/td>\r\n0.78<\/td>\r\n0.06<\/td>\r\n<\/tr>\r\n
      Swiggy<\/td>\r\n0.75<\/td>\r\n0.07<\/td>\r\n<\/td>\r\nSwiggy<\/td>\r\n0.74<\/td>\r\n0.07<\/td>\r\n<\/td>\r\nSwiggy<\/td>\r\n0.74<\/td>\r\n0.07<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"}],"msr_startdate":"2020-10-30","msr_enddate":"2020-10-31","msr_event_time":"","msr_location":"Virtual\/Online","msr_event_link":"http:\/\/www.aka.ms\/CSworkshop2020","msr_event_recording_link":"","msr_startdate_formatted":"October 30, 2020","msr_register_text":"Watch now","msr_cta_link":"http:\/\/www.aka.ms\/CSworkshop2020","msr_cta_text":"Watch now","msr_cta_bi_name":"Event Register","featured_image_thumbnail":"\"\"","event_excerpt":"Code-switching is the use of multiple languages in the same utterance and is common in multilingual communities across the world. Code-switching poses many challenges to speech and NLP systems and has gained widespread interest in academia and industry recently. We organized special sessions on code-switching at Interspeech 2017, 2018 and 2019. In 2020, we will be organizing this as an online-only workshop co-located (virtually) with Interspeech 2020.","msr_research_lab":[199562],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/629775","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":17,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/629775\/revisions"}],"predecessor-version":[{"id":701731,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/629775\/revisions\/701731"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/627609"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=629775"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=629775"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=629775"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=629775"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=629775"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=629775"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=629775"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=629775"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=629775"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
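For participants preparing submissions, the following is a minimal sketch of writing Task A and Task B results files in the CSV layout described in the Shared Task tab (Task A: one 0/1 label per audio file; Task B: one language tag per full 200 ms frame, with any trailing partial frame ignored). The predictor outputs, the example file names such as TaskA-TA-mymodel.csv, and the WAV-reading helper are illustrative assumptions, not part of the shared task release.

```python
# Minimal sketch (not part of the official shared task release) of writing
# results files in the submission format described in the Shared Task tab.
import math
import wave

FRAME_SECONDS = 0.2  # Task B expects one language tag per 200 ms frame


def num_full_frames(wav_path):
    """Count complete 200 ms frames in a WAV file; a trailing partial frame
    is ignored (e.g. 4.56 s of audio -> 22 frames covering 4.40 s)."""
    with wave.open(wav_path, "rb") as w:
        duration = w.getnframes() / float(w.getframerate())
    return int(math.floor(duration / FRAME_SECONDS))


def write_task_a(results, out_path):
    """Task A: results is a list of (filename, label) pairs,
    where label 0 = code-switched and 1 = monolingual."""
    with open(out_path, "w") as f:
        for filename, label in results:
            f.write("{},{}\n".format(filename, label))


def write_task_b(results, out_path):
    """Task B: results is a list of (filename, tags) pairs, where tags is a
    list with one language-tag string per full 200 ms frame."""
    with open(out_path, "w") as f:
        for filename, tags in results:
            f.write("{},{}\n".format(filename, " ".join(tags)))


if __name__ == "__main__":
    # Hypothetical example: file names, labels and tags are placeholders only.
    write_task_a([("utt_0001.wav", 0), ("utt_0002.wav", 1)],
                 "TaskA-TA-mymodel.csv")
    write_task_b([("utt_0001.wav", ["T", "T", "E", "E", "S"])],
                 "TaskB-TA-mymodel.csv")
```

The frame count in num_full_frames mirrors the truncation rule stated above: audio that does not divide evenly into 200 ms frames is evaluated only up to the last complete frame.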