Multilingual speech control for ROS-driven robots - Salzburg Research Forschungsgesellschaft
Author: Birgit Strohmeier (https://www.salzburgresearch.at/en/author/birgit/)
Publication page: https://www.salzburgresearch.at/en/publikation/multilingual-speech-control-for-ros-driven-robots/
DOI: https://doi.org/10.1007/s00502-019-00739-y

To improve collaboration between humans and robots, multilingual speech control (MLS) can be used to manage multiple robots at any time via spoken commands. Once a command is recognised by one of the corresponding ROS-driven robots in the network, it is executed and related audio feedback is provided to the user. Our MLS implementation has a modular design, so that individual functional modules can be implemented either by online cloud-based services or by local offline software for increased privacy. Furthermore, the extensible design allows the system to meet future user needs and to be adapted to different robot capabilities. The MLS follows a principal workflow: first, a language identification analysis is performed, followed by speech-to-text transformation. The intent is then detected and possible variables are analysed to interpret the command, which is subsequently sent to the corresponding robot. Finally, the robot publishes the state achieved by the command execution […]
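The staged workflow described in the abstract (language identification, speech-to-text, intent and variable detection, dispatch to the robot) could be sketched roughly as below. All function names, the rule-based stand-ins, and the two-language toy vocabulary are illustrative assumptions, not the authors' implementation; each stage is a pluggable callable, mirroring the paper's point that modules can be swapped between cloud services and offline software.

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical sketch of the MLS pipeline. Each stage is a plain function
# here; in the modular design described in the abstract, each could be
# replaced by an online cloud service or a local offline implementation.

@dataclass
class Command:
    intent: str
    variables: Dict[str, str]

def identify_language(utterance: str) -> str:
    # Stand-in language identification: a real module would analyse the
    # audio signal, not keywords in text.
    return "de" if "fahre" in utterance.lower() else "en"

def speech_to_text(utterance: str, lang: str) -> str:
    # Stand-in speech-to-text: the "utterance" is already text in this toy.
    return utterance.lower()

def detect_intent(text: str) -> Command:
    # Toy rule-based intent detection with one variable slot ("target").
    if "move" in text or "fahre" in text:
        return Command(intent="move", variables={"target": text.split()[-1]})
    return Command(intent="unknown", variables={})

def dispatch(cmd: Command) -> str:
    # Stand-in for sending the command into the ROS network; the robot
    # would then publish the state achieved by the command execution.
    return f"robot executes {cmd.intent} -> {cmd.variables}"

def mls_pipeline(utterance: str) -> str:
    lang = identify_language(utterance)
    text = speech_to_text(utterance, lang)
    return dispatch(detect_intent(text))
```

In a real deployment each stage would sit behind a common interface so that, for example, a privacy-sensitive site can swap a cloud STT service for a local model without touching the rest of the pipeline.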