Ministerio de Ciencia e Innovación



In recent years, the need for assistant robots has grown. This is a consequence of the increasing complexity of our society and economy, greater demands for well-being, an aging population, and the large number of people who have difficulty handling devices and accessing new technologies.

The purpose of this project is the development of an automatic bellboy system based on a set of mobile platforms that interact with customers and service personnel to carry out different tasks in hotel environments. The development is being carried out in two complementary areas, mobile robots and social robots, which justifies the collaboration of two research groups (the University of Vigo and the Cartif Technology Center).

In the field of mobile robotics, this project helps to improve the robotics development environment RIDE (Integrated Development Environment for Robotics), which has already been used in other projects. One objective of the current project is to generalize the development environment, which has already been applied successfully to the creation of monitoring applications, and thus extend it to support the creation of any Web application based on mobile robots. RIDE includes a tool for integrating devices (sensors and actuators) using the RoboCAN protocol [can]. The control modules communicate through IPC, while the connection between robots and Web terminals is accomplished using JIPC. Task programming and scheduling are performed with RoboGraph, which uses Petri nets. In addition, the environment also includes a set of modules for navigation and interaction with people.
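The message-passing style that connects RIDE modules can be illustrated with a minimal publish/subscribe sketch. This is a hypothetical in-process model only: the names below are invented for illustration, and the real IPC and JIPC libraries expose different interfaces.

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe bus, loosely modeled on the IPC-style
    messaging that connects RIDE modules (illustrative sketch only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, msg_type, handler):
        """Register a handler for a given message type."""
        self._subscribers[msg_type].append(handler)

    def publish(self, msg_type, payload):
        """Deliver a message to every handler registered for its type."""
        for handler in self._subscribers[msg_type]:
            handler(payload)

# Example: a navigation module publishes the robot pose; a GUI module
# subscribes to it to draw the robot on the map.
bus = MessageBus()
poses = []
bus.subscribe("robot_pose", poses.append)
bus.publish("robot_pose", {"x": 1.2, "y": 3.4, "theta": 0.0})
```

In this decoupled style, the navigation module needs no knowledge of which GUIs, loggers, or planners are listening, which is what lets new modules be added to the architecture without modifying existing ones.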

In the field of social robotics, researchers from CARTIF are continuing the development of robust sensing systems: face recognition and emotion and gesture recognition. They have also started developing new architectures for conversational social robots and agents, along with new mechanisms for learning through mentoring and imitation.

Finally, based on the experience of both groups, we have built a prototype that will be deployed in a hotel environment, where all of this research will be put to the test.



The following figures show a possible application scenario.


In this scenario, an assistant robot named BellBot (Bellboy Robot) has been developed to provide services in hotels. This application domain allows a better validation of the current systems and navigation techniques, as well as of the RIDE development tools.

The different robots are connected to a central station, to which the terminals in guest rooms, at the front desk and in other hotel facilities are also connected. The guests' GUI allows them to order services and check the status of those orders. The second figure in this section shows the main window of the GUI that guests use to make their requests.

BellBot GUI

On the other hand, from the front desk GUI the hotel staff can monitor the status of the various robots and of the requests that guests have made. Regarding robot status, the GUI shows their positions on the maps of the different floors, sensor information, work in progress, etc. The third image in this section shows the main window of this graphical interface, where we can see the robot positions on the selected map as well as other information concerning that map.

BellBot GUI

Among the different services and tasks the robot may offer, the following can be mentioned:

In order to meet these requirements, the robot will have the following basic features:


Application architecture

Following the RIDE development environment, the outline of the application modules is shown in the first figure.

Multi-robot control architecture

As can be seen, it is a centralized system. Some modules, such as JCentral, TaskManager and Domotic, typically run on the central server. The different clients use the two aforementioned GUIs, which connect to JCentral through the building LAN. The robots are also connected to this module via WiFi.

A similar scheme, but using IPC instead of JIPC, is used for each robot's control architecture, as shown in the second figure.

Robot control architecture

The RobotWeb module serves as a gateway for message exchange between IPC and JIPC.
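The gateway role can be sketched as a module that subscribes on one bus and republishes on the other: robot status flows up toward the central side, and task commands flow down to the robot. All names here are hypothetical; the real RobotWeb module and the IPC/JIPC libraries differ.

```python
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus standing in for IPC (onboard) and
    JIPC (network side). Illustrative only; the real APIs differ."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, msg_type, handler):
        self._subs[msg_type].append(handler)

    def publish(self, msg_type, payload):
        for handler in self._subs[msg_type]:
            handler(payload)

class RobotWebGateway:
    """Forwards selected message types between the two buses, in the
    spirit of the RobotWeb module. Each type flows in one direction
    only, which avoids forwarding loops."""

    def __init__(self, ipc, jipc, up_types, down_types):
        for t in up_types:    # robot -> central station
            ipc.subscribe(t, lambda msg, t=t: jipc.publish(t, msg))
        for t in down_types:  # central station -> robot
            jipc.subscribe(t, lambda msg, t=t: ipc.publish(t, msg))

ipc, jipc = Bus(), Bus()
RobotWebGateway(ipc, jipc,
                up_types=["robot_status"], down_types=["task_command"])

# A status message published onboard reaches subscribers on the JIPC side.
received = []
jipc.subscribe("robot_status", received.append)
ipc.publish("robot_status", {"robot": "bellbot-1", "state": "idle"})
```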


Robot design

This robot was designed exclusively for this application. The figures show its different parts.

The base houses the battery and the synchronous drive system, which gives the robot full mobility in the corridors, rooms and other spaces of the hotel. When turning, all four wheels and the top of the robot rotate while the rest of the base stays static. Each of these four wheels has a slightly smaller wheel mounted in parallel to avoid getting stuck in gaps such as the one between the elevator and the floor. The housing of the base consists of removable doors with touch sensors in their anchors, so that contact with an obstacle can be detected. Just above the base there is a scanning laser that allows the construction of maps and the localization of the robot within them. The laser readings, along with the readings from the sonar ring, are also used to detect and avoid obstacles the robot may encounter. At the top of the base there is a drawer that opens and closes automatically, in which the robot can carry small items such as newspapers, drinks or snacks. This drawer only opens when someone has to place something inside it (as at the bar) or remove something from it (as in the rooms). At the back, small objects such as small travel bags can be placed while the robot accompanies guests to their rooms.

The top part comprises the torso, arms and head, and its function is friendly interaction with guests, conveying empathy through its movements. That is, customer acceptance is sought through something more pleasant than a simple base with a drawer. Each arm has two degrees of freedom, one in the elbow and another in the shoulder, and also has touch sensors in the hand that detect whether someone touches or grasps it. With these capabilities the robot can perform actions such as shaking hands and greeting. The torso has a touch screen that allows the customer to make selections, etc. Finally, the head includes an array of LEDs in the mouth, eyes with adjustable backlight, eyelids that open and close, and a pair of motors that give the neck two degrees of freedom.


Project settings

As with the surveillance application, the goal of this work is not limited to the creation of a single project, but to create a generic application with tools that allow people with little programming knowledge to build various projects easily. This is possible thanks to a graphical interface for project configuration. The whole system can be installed and programmed in a short period of time with a graphical tool (BellBot Editor) that makes it easy to define the parameters of the project. This program generates a single configuration file for the entire system, thus avoiding possible inconsistencies in the information handled by the different modules. In general, the steps for creating a project are:

Within the RIDE control architecture, the modules that use the resulting configuration file are:

As can be seen, the goal is to provide the tools an installer needs to create and deploy a new project without having to write new code.
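The single-configuration-file idea can be illustrated with a small sketch. The file name, format and fields below are hypothetical (the actual file produced by the BellBot Editor is not described here); the point is that every module loads the same file, so no module can hold an inconsistent copy of the project data.

```python
import json

# Hypothetical project description, of the kind the BellBot Editor
# might emit: maps, named places, and the robots in service.
project = {
    "maps": [{"floor": 1, "file": "floor1.map"}],
    "places": [
        {"name": "front_desk", "floor": 1, "x": 0.0, "y": 0.0},
        {"name": "room_4", "floor": 1, "x": 12.5, "y": 3.0},
    ],
    "robots": ["bellbot-1"],
}

# The editor writes one file for the entire system...
with open("bellbot_project.json", "w") as f:
    json.dump(project, f, indent=2)

# ...and every module (TaskManager, GUIs, navigation) reads that same
# file, guaranteeing a consistent view of the installation.
with open("bellbot_project.json") as f:
    cfg = json.load(f)
```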


The robot in action

To get an idea of how the system works, let's take a look at the execution of a typical task. So far, the first tests have been carried out in the Systems Engineering and Automation Department, where teachers' offices serve as "rooms", the entrance to the Department serves as the "front desk" and the Vision laboratory serves as the "bar". We therefore use the Department environment shown in the first figure.

TASK: A guest in the room labeled 4 in the first figure orders a drink (a soda) from the bar (i.e., the Vision laboratory).

First, the guest makes the request from a terminal in his room through an interface like the one shown in the second figure.

BellBot GUI

In the following video (video 1) we can see how the user requests the drink through the graphical interface.

The central system then receives this request and assigns the task to a robot that is ready, or queues the task if no robot is available. Once a robot becomes available, it starts performing the task, as shown in the following videos (videos 2 and 3).
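This assign-or-queue dispatch policy can be sketched in a few lines. The class and method names are invented for illustration; the real TaskManager module handles many more concerns (priorities, battery levels, multi-floor routing, etc.).

```python
from collections import deque

class TaskManager:
    """Sketch of the central dispatch policy described above: assign a
    request to an idle robot immediately, otherwise queue it until a
    robot finishes its current task. (Hypothetical names; illustrative
    only.)"""

    def __init__(self, robots):
        self.idle = deque(robots)   # robots ready for work
        self.pending = deque()      # tasks waiting for a robot
        self.assignments = {}       # task -> robot

    def request(self, task):
        """Handle a new guest request."""
        if self.idle:
            self.assignments[task] = self.idle.popleft()
        else:
            self.pending.append(task)

    def robot_done(self, robot):
        """A robot finished its task: give it the next pending one,
        or return it to the idle pool."""
        if self.pending:
            self.assignments[self.pending.popleft()] = robot
        else:
            self.idle.append(robot)

tm = TaskManager(["bellbot-1"])
tm.request("deliver soda to room 4")   # assigned immediately
tm.request("carry bags to room 7")     # queued: no robot is free
tm.robot_done("bellbot-1")             # frees the robot; queue drains
```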

While the robot is waiting, its sensors can remain active to detect a person, greet them, etc. The following video (video 4) shows some examples of how the robot responds differently depending on the situation. For example, if the robot introduces itself and moves its hand forward, we can shake hands; but if someone touches its hand without the robot greeting first, it reacts with rejection. All these behaviors can be programmed easily using the RoboGraph tool.
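The greeting behavior just described can be expressed as a Petri net, the formalism RoboGraph uses. Below is a tiny interpreter and a hypothetical net for the hand-shake interaction (places and transitions are invented for illustration; RoboGraph itself is a graphical tool, not this API): the "shake" transition is only enabled after the robot has offered its hand, while touching the hand first consumes the "idle" token and leads to rejection instead.

```python
class PetriNet:
    """Minimal Petri-net interpreter: a marking maps places to token
    counts, and each transition names its input and output places."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)    # place -> token count
        self.transitions = transitions  # name -> (inputs, outputs)

    def enabled(self, name):
        """A transition is enabled when every input place has a token."""
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Consume one token per input place, produce one per output."""
        assert self.enabled(name), f"{name} is not enabled"
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical greeting net: shaking hands requires the robot to have
# offered its hand first; touching first leads to "rejected" instead.
net = PetriNet(
    {"idle": 1},
    {
        "offer_hand":  (["idle"], ["hand_offered"]),
        "shake":       (["hand_offered"], ["greeted"]),
        "touch_first": (["idle"], ["rejected"]),
    },
)
net.fire("offer_hand")
net.fire("shake")
```

Because the token in "idle" is consumed by whichever transition fires first, the net itself enforces that the two outcomes (greeting vs. rejection) are mutually exclusive, which is exactly the kind of sequencing constraint Petri nets make easy to state.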

Finally, in video 5 we can see BellBot avoiding obstacles along the Department corridors during one of its missions.

This work has been partially supported by the Spanish Ministry of Science and Innovation (Project DPI2008-06738-C02-02).
