{"id":2855,"date":"2025-01-08T09:31:09","date_gmt":"2025-01-08T09:31:09","guid":{"rendered":"https:\/\/bullseye.ac\/blog\/?p=2855"},"modified":"2025-01-08T09:31:11","modified_gmt":"2025-01-08T09:31:11","slug":"the-future-of-robotics-integrating-touch-and-vision","status":"publish","type":"post","link":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/","title":{"rendered":"The Future of Robotics: Integrating Touch and Vision"},"content":{"rendered":"\n<p>Number of words: 398<\/p>\n\n\n\n<p>When able, humans use their senses in tandem. Hearing a voice, a person turns around to see if it&#8217;s their friend. Touching an object can offer tactile information, but viewing it can confirm its true identity. But robots can&#8217;t use their senses in tandem as easily, which is why scientists at MIT&#8217;s Computer Science and Artificial Intelligence Lab (CSAIL) have worked to correct what they call a robotic &#8220;sensory gap.&#8221;<\/p>\n\n\n\n<p>To connect sight and touch, the CSAIL engineers worked with a KUKA robot arm, a type often used in industrial warehouses. The scientists outfitted it with a special tactile sensor called GelSight\u2014a slab of transparent, synthetic rubber that works as an imaging system. Objects are pressed into GelSight, and then cameras surrounding the slab monitor the impressions.<\/p>\n\n\n\n<p>With a common webcam, the CSAIL team recorded almost 200 objects, including tools, household products, and fabrics, being touched by the robot arm more than 12,000 times. That created a trove of video clips that the team could break down into 3 million static images, creating a dataset they termed \u201cVisGel.\u201d<\/p>\n\n\n\n<p>\u201cBy looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,\u201d says Yunzhu Li, CSAIL Ph.D. 
student and lead author on a new paper about the system, in a press statement. \u201cBy blindly touching around,\u201d Li says, \u201cour model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.\u201d<\/p>\n\n\n\n<p>The CSAIL team then combined the VisGel dataset with what are known as generative adversarial networks, or GANs. GANs are deep neural net architectures composed of two nets, according to an explainer from AI company Skymind, with the potential to imitate images, music, speech, and prose. They&#8217;re often associated with artistic uses of AI. In 2018, the auction house Christie&#8217;s sold a painting generated by a GAN for $432,000.<\/p>\n\n\n\n<p>GANs pit two neural nets against each other. One net is deemed the &#8220;generator,&#8221; while the other is called the &#8220;discriminator.&#8221; The generator creates images that it tries to make look real; the discriminator tries to prove the images are fake. Each time the discriminator wins the battle, the generator is forced to revise its own internal logic, hopefully refining itself into a better system.<\/p>\n\n\n\n<p><em>Excerpted from <\/em><a href=\"https:\/\/www.popularmechanics.com\/technology\/robots\/a28068165\/mit-teaching-robots-combine-sight-touch\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>https:\/\/www.popularmechanics.com\/technology\/robots\/a28068165\/mit-teaching-robots-combine-sight-touch\/<\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Number of words: 398 When able, humans use their senses in tandem. Hearing a voice, a person turns around to see if it&#8217;s their friend. Touching an object can offer tactile information, but viewing it can confirm its true identity. 
But robots can&#8217;t use their senses in tandem as easily, which is why scientists at &#8230; <a title=\"The Future of Robotics: Integrating Touch and Vision\" class=\"read-more\" href=\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/\" aria-label=\"More on The Future of Robotics: Integrating Touch and Vision\">Read more<\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_eb_attr":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[10],"tags":[],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Future of Robotics: Integrating Touch and Vision - BullsEye<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Future of Robotics: Integrating Touch and Vision - BullsEye\" \/>\n<meta property=\"og:description\" content=\"Number of words: 398 When able, humans use their senses in tandem. Hearing a voice, a person turns around to see if it&#8217;s their friend. Touching an object can offer tactile information, but viewing it can confirm its true identity. But robots can&#8217;t use their senses in tandem as easily, which is why scientists at ... 
Read more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/\" \/>\n<meta property=\"og:site_name\" content=\"BullsEye\" \/>\n<meta property=\"article:published_time\" content=\"2025-01-08T09:31:09+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-01-08T09:31:11+00:00\" \/>\n<meta name=\"author\" content=\"Bhavya Chowdhury\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Bhavya Chowdhury\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/\",\"url\":\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/\",\"name\":\"The Future of Robotics: Integrating Touch and Vision - 
BullsEye\",\"isPartOf\":{\"@id\":\"https:\/\/bullseye.ac\/blog\/#website\"},\"datePublished\":\"2025-01-08T09:31:09+00:00\",\"dateModified\":\"2025-01-08T09:31:11+00:00\",\"author\":{\"@id\":\"https:\/\/bullseye.ac\/blog\/#\/schema\/person\/992754c8575e3584d4c0dbcab059dd23\"},\"breadcrumb\":{\"@id\":\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/bullseye.ac\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The Future of Robotics: Integrating Touch and Vision\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/bullseye.ac\/blog\/#website\",\"url\":\"https:\/\/bullseye.ac\/blog\/\",\"name\":\"BullsEye\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/bullseye.ac\/blog\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/bullseye.ac\/blog\/#\/schema\/person\/992754c8575e3584d4c0dbcab059dd23\",\"name\":\"Bhavya Chowdhury\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/bullseye.ac\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/96cc080647ada77871a0fe51c103b135?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/96cc080647ada77871a0fe51c103b135?s=96&d=mm&r=g\",\"caption\":\"Bhavya Chowdhury\"},\"url\":\"https:\/\/bullseye.ac\/blog\/author\/bhavya-chowdhury\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"The Future of Robotics: Integrating Touch and Vision - BullsEye","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/","og_locale":"en_US","og_type":"article","og_title":"The Future of Robotics: Integrating Touch and Vision - BullsEye","og_description":"Number of words: 398 When able, humans use their senses in tandem. Hearing a voice, a person turns around to see if it&#8217;s their friend. Touching an object can offer tactile information, but viewing it can confirm its true identity. But robots can&#8217;t use their senses in tandem as easily, which is why scientists at ... Read more","og_url":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/","og_site_name":"BullsEye","article_published_time":"2025-01-08T09:31:09+00:00","article_modified_time":"2025-01-08T09:31:11+00:00","author":"Bhavya Chowdhury","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Bhavya Chowdhury","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/","url":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/","name":"The Future of Robotics: Integrating Touch and Vision - BullsEye","isPartOf":{"@id":"https:\/\/bullseye.ac\/blog\/#website"},"datePublished":"2025-01-08T09:31:09+00:00","dateModified":"2025-01-08T09:31:11+00:00","author":{"@id":"https:\/\/bullseye.ac\/blog\/#\/schema\/person\/992754c8575e3584d4c0dbcab059dd23"},"breadcrumb":{"@id":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/bullseye.ac\/blog\/technology\/the-future-of-robotics-integrating-touch-and-vision\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/bullseye.ac\/blog\/"},{"@type":"ListItem","position":2,"name":"The Future of Robotics: Integrating Touch and Vision"}]},{"@type":"WebSite","@id":"https:\/\/bullseye.ac\/blog\/#website","url":"https:\/\/bullseye.ac\/blog\/","name":"BullsEye","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/bullseye.ac\/blog\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/bullseye.ac\/blog\/#\/schema\/person\/992754c8575e3584d4c0dbcab059dd23","name":"Bhavya 
Chowdhury","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/bullseye.ac\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/96cc080647ada77871a0fe51c103b135?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/96cc080647ada77871a0fe51c103b135?s=96&d=mm&r=g","caption":"Bhavya Chowdhury"},"url":"https:\/\/bullseye.ac\/blog\/author\/bhavya-chowdhury\/"}]}},"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"Bhavya Chowdhury","author_link":"https:\/\/bullseye.ac\/blog\/author\/bhavya-chowdhury\/"},"uagb_comment_info":0,"uagb_excerpt":"Number of words: 398 When able, humans use their senses in tandem. Hearing a voice, a person turns around to see if it&#8217;s their friend. Touching an object can offer tactile information, but viewing it can confirm its true identity. But robots can&#8217;t use their senses in tandem as easily, which is why scientists 
at&hellip;","_links":{"self":[{"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/posts\/2855"}],"collection":[{"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/comments?post=2855"}],"version-history":[{"count":1,"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/posts\/2855\/revisions"}],"predecessor-version":[{"id":2856,"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/posts\/2855\/revisions\/2856"}],"wp:attachment":[{"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/media?parent=2855"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/categories?post=2855"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bullseye.ac\/blog\/wp-json\/wp\/v2\/tags?post=2855"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}