{"id":876411,"date":"2022-09-08T10:55:18","date_gmt":"2022-09-08T17:55:18","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=876411"},"modified":"2022-09-08T11:18:40","modified_gmt":"2022-09-08T18:18:40","slug":"eccv-workshop-on-computer-vision-in-the-wild","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/eccv-workshop-on-computer-vision-in-the-wild\/","title":{"rendered":"ECCV Workshop on “Computer Vision in the Wild”"},"content":{"rendered":"\n
\"a (opens in new tab)<\/span><\/a>
Please join the Workshop & Challenge on “Computer Vision in the Wild<\/em> (opens in new tab)<\/span><\/a>\u2019\u2019 at #ECCV2022<\/figcaption><\/figure>\n\n\n\n

Website: https://computer-vision-in-the-wild.github.io/eccv-2022/

Workshop: The research community has recently witnessed a trend toward building transferable visual models that can effortlessly adapt to a wide range of downstream computer vision (CV) and multimodal (MM) tasks. We are organizing this "Computer Vision in the Wild" workshop to bring together the academic and industry communities working on CV problems in real-world scenarios, with a focus on the challenges of open-set/domain visual recognition and efficient task-level transfer. Since there are no established benchmarks for measuring progress on "CV in the Wild", we have developed new benchmarks for image classification and object detection that measure the task-level transfer ability of various models and methods across diverse real-world datasets, in terms of both prediction accuracy and adaptation efficiency.
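To make "task-level transfer" concrete, here is a minimal sketch of the zero-training-sample end of that spectrum: zero-shot classification with a pre-trained language-image model, where downstream class names are turned into text prompts and images are matched to them by embedding similarity. The example assumes the open-source CLIP package and CIFAR-10 purely for illustration; it is not the workshop's ELEVATER toolkit or evaluation protocol.

```python
# Illustrative sketch only (not the ELEVATER toolkit): zero-shot transfer of a
# pre-trained language-image model to a downstream classification dataset,
# i.e. zero training samples and zero trainable parameters.
# Assumes the open-source `clip` package (github.com/openai/CLIP) and torchvision.
import torch
import clip
from torchvision.datasets import CIFAR10

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Build a text classifier from the downstream class names via a prompt template.
dataset = CIFAR10(root="./data", train=False, download=True)
prompts = [f"a photo of a {name}" for name in dataset.classes]
text_tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    text_features = model.encode_text(text_tokens)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Classify one test image by cosine similarity to the class prompts.
    image, label = dataset[0]
    image_features = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    image_features /= image_features.norm(dim=-1, keepdim=True)

    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
    pred = probs.argmax(dim=-1).item()

print(f"predicted: {dataset.classes[pred]}  ground truth: {dataset.classes[label]}")
```

Few-shot and fine-tuning variants of the same evaluation, repeated over many datasets, trace out the sample-efficiency and parameter-efficiency axes described in the challenge below.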

Challenge: This workshop will also host two challenges based on the ELEVATER benchmarks. ELEVATER is a platform of 20 public image classification datasets and 35 public object detection datasets for evaluating language-image models on task-level visual transfer, measuring both sample efficiency (number of training samples) and parameter efficiency (number of trainable parameters). The two challenges are: