The YARP middleware was chosen to provide an abstraction layer for the robot's sensors and motors, together with a set of interfaces. This abstraction layer allows us to write code that can seamlessly interface either with simulators or with the real robot, remotely through the network or on the same machine using inter-process communication.
The initial phase of development focused on implementing this abstraction layer for the simulated robot in Gazebo (Hoffman et al.). This allowed us to start performing experiments early in the project.
The same interface was then implemented on the real robot, allowing the code developed in the simulator to be transferred with only minimal parameter changes. The Walk-Man architecture is organized into four software layers (see Figure 3). At the lowest level, each joint of Walk-Man is controlled by a PID position loop in a distributed embedded electronic system, with one board per joint.
Our main aim was to have a hard real-time loop in the firmware: the execution time of each firmware function was measured and tuned so that a 1 kHz loop could be implemented. In the software architecture we developed for the Coman platform, communication from the control PC to each board was performed over an Ethernet bus using a combination of UDP and TCP packets. The lack of synchronization between boards led to conflicts and, consequently, to packet loss with UDP and delays with TCP. We therefore moved to an EtherCAT implementation, which allows synchronized communication and much better control over the data traveling on the bus.
In particular, we measured the maximum number of bytes that each board could handle at 1 kHz and defined a standard packet size with standard contents, as shown in Figure 4. This standard packet definition is an example of the various interfaces between software levels described in this work, which allow software decoupling and testing.
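The idea of a fixed-size, fixed-layout packet can be sketched as follows. The field names and types below are illustrative assumptions, not the actual layout from Figure 4:

```python
import struct

# Hypothetical layout of the fixed-size packet exchanged with each joint board
# at 1 kHz; the actual fields in Figure 4 may differ. "<" = little-endian.
JOINT_PACKET = struct.Struct("<ifffH")  # board id, position, velocity, torque, temperature

def encode_joint_state(board_id, position, velocity, torque, temperature):
    """Serialize one board's state into the standard fixed-size packet."""
    return JOINT_PACKET.pack(board_id, position, velocity, torque, temperature)

def decode_joint_state(payload):
    """Parse a packet received from a joint board back into a tuple."""
    return JOINT_PACKET.unpack(payload)
```

Because every board emits the same number of bytes per cycle, the bus load per cycle is simply the packet size times the number of boards, which is what makes the 1 kHz budget verifiable in advance.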
In the real robot, the hardware manager runs on the control PC and is called the EtherCAT Master. It manages the EtherCAT slaves, i.e., the joint boards. The Master can be seen as a hardware robot driver, which handles low-level communication and exposes a simpler, asynchronous API to the higher levels. Writing real-time code requires expertise that not all of the software developers had.
Therefore, a separation between the Master and the modules implementing higher-level behaviors was introduced. This separation was achieved through the YARP middleware, using the remotization functionality it provides for the robot abstraction layer. Developers write software that communicates with the Master through the network, using asynchronous communication over YARP. This decoupling is beneficial because it allows modules to be stopped and started without interfering with the Master. More importantly, it prevents modules that behave erratically from affecting the real-time performance of the Master. The Master creates an input and an output YARP port for each control module and for each type of information the module requires.
In Figure 5, the modules running during the DRC driving task are reported together with their communication frequencies. As an example, during the DRC driving task, three modules were using joint readings at 10 and 1 Hz, one required torques and temperatures at 10 Hz, and two were controlling the joints at 10 Hz. An additional set of libraries, named WholeBodyInterface, hides the YARP channels from the control modules and relieves developers of the boilerplate required to prepare and parse messages to and from the robot.
This abstraction layer between the hardware driver and the control modules allowed us to easily switch between simulation and the real robot, since the Gazebo plug-ins for the Walk-Man robot implement exactly the same YARP classes and interfaces as the Master (see Figure 6).
Figure 6. Detailed low-level software stack, including robot hardware abstractions and a whole-body class used by the Generic Yarp Module. In the simulation case, the Gazebo plugin substitutes for the HAL standalone application and is fully compatible with the same set of shared libraries.
The two-tier abstraction layer implements a whole-body interface on top of the robot interface defined by YARP. The main difference between the two layers is that the latter separates joints into kinematic chains and implements interfaces for individual sensors; for practical reasons, the logical separation of the kinematic chains at this level is subject to fluctuations (for example, it affects how joint states are broadcast on the network).
The whole-body interface, instead, groups all joints and associated sensors into a single kinematic chain. The advantage of this separation is that the interface exposed to the user, the whole-body interface, is stable, because it is defined solely by the number of joints of the robot.
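The stability argument can be sketched in a few lines. The class and method names below are illustrative, not the actual WholeBodyInterface API:

```python
class WholeBody:
    """Stable whole-body view built on top of a fluctuating chain partition."""

    def __init__(self, chains):
        # chains: mapping of chain name -> list of joint names. How joints are
        # partitioned into chains may change between releases...
        self._chains = chains
        # ...but the whole-body view depends only on the set of joints, so it
        # stays the same no matter how the chains are regrouped.
        self.joints = sorted(j for chain in chains.values() for j in chain)

    def index_of(self, joint):
        # Stable whole-body index of a joint, independent of chain grouping.
        return self.joints.index(joint)
```

User code written against `joints` and `index_of` is unaffected by any regrouping of the underlying chains, which is precisely the property exploited in the episode described next.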
As an extreme example, 15 days before the DRC, we had to intentionally break the functions responsible for moving the robot joints. To reduce resource usage and the jitter due to CPU overload, we changed how joints are grouped and transmitted on the network; all the required changes affected the YARP abstraction layer and remained limited to the implementation of the whole-body interface. All user code remained untouched. The simulation, the real robot, and all the control modules were updated in just 2 days.
As suggested by Johnson et al., breaking API changes should generally be avoided. However, we are convinced that, in a research environment, APIs may need to change at critical moments, and the proposed approach is a way to mitigate the effect of such changes. Finally, an advantage of this two-layer architecture is that it separates the control modules from the middleware. This will allow the communication layer (i.e., YARP) to be changed without modifying the control modules. A control module can be summarized as a sense-compute-move loop: sense receives all the inputs from the robot, and compute uses those inputs to implement the control law of the module.
Finally, move sends the newly computed desired joint positions to the robot. In practice, developers also spend part of their effort on initialization code. GYM was iteratively improved, driven by the effort to remove code duplicated across modules and by the feedback of the team (10 developers), who helped revise the specifications and debug the library. Our experience showed that adopting GYM reduced duplicated code significantly.
In addition, GYM provides another layer of separation between user code and the middleware. GYM is organized in two threads: a watchdog running at 1 Hz and a main control loop running at a configurable frequency (Figure 7).
Developers write their own code inside the control loop function run(); they also have access to a set of helper functions providing a standard kinematic description of the robot based on its URDF. The watchdog thread is not customizable: it listens for standard commands from the pilot through one of the standard communication interfaces (the Switch Interface, described in the next section).
Figure 7. Note that the user can override the default empty implementations of the pause and resume functions to take whatever actions are required to save and restore the state of his own control module. By contrast, to keep different modules organized in a similar structure, the init function is required to be implemented by the user and must contain all the initialization code.
Moreover, with this approach, executables can be started at any moment, while the pilot keeps the choice of when a module is initialized and connected to the rest of the running software. To show some of the GYM library functions, we report a simple run implementation. The variable robot is provided by GYM and is used to interact with the hardware through simple functions such as sensePosition and move.
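The original listing does not survive in this copy; the following Python-style sketch reconstructs the sense-compute-move shape described above. The proportional control law and the `target`/`gain` parameters are our own placeholders; only `robot.sensePosition` and `robot.move` come from the text:

```python
def run(robot, target, gain=0.1):
    """One iteration of a minimal sense-compute-move loop (illustrative)."""
    q = robot.sensePosition()           # sense: read current joint positions
    # compute: a placeholder proportional law toward a target configuration
    q_desired = [qi + gain * (ti - qi) for qi, ti in zip(q, target)]
    robot.move(q_desired)               # move: send desired joint positions
    return q_desired
```

In the real GYM template, this body runs inside the periodic control-loop thread, while the watchdog thread handles pilot commands independently.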
A more complex implementation may, for example, use multiple state machines depending on the cmd values, read or ignore commands from the user, selectively avoid sensing or moving the robot while planning a complex movement, or even evolve a state machine automatically without requiring user commands.
One of the features implemented in GYM is a set of communication interfaces between the module and the pilot: Command, Status, Warning, and Switch. In their default implementation, these interfaces send an array of characters through the network; the Command and Status interfaces also support a custom data serializer, which the user can implement in order to send any type of data.
The Warning Interface is an advanced interface that can be used in dangerous situations. The main differences between this interface and Status are the priority given to its data in the communication between the control PC and the pilot PC, and the different visualization in the pilotInterface, where Warning messages are shown in red (see Section 2).
The Switch Interface is used to send the following commands to each module: start, pause, resume, stop, and quit. Since some of these commands are critical, they cannot be overridden with different implementations: modules are allowed to re-implement only pause and resume functions.
This approach guarantees that any bug or misbehavior in the code running inside a GYM module does not propagate to the whole system, since a module can always be forced to stop by the pilot with a stop command. Note that, differently from pause, the stop command does not trigger any soft exit procedure. For example, stopping the walking module while the robot is dynamically walking may result in a fall: if the pilot wants the robot to stop walking without falling, he should send the pause command to the walking module, which, depending on the robot's status, should either stop immediately (double-stance phase) or finish the current step and put both feet on the ground.
Manipulation modules are safer in this sense, since the robot is usually stable while moving its arms; nevertheless, a pause procedure should still be implemented, as it allows the module to save its internal state and resume it later.
Thus, stop is used to quit a module that is no longer needed, or to force-quit a module that is not controlling the robot but could be stuck in a loop due to bugs. The behavior of the GYM state machine is reported in Figure 8. Except for the special Constructor and Destructor states, there are three available states.
The only state accessible from the Constructor is Running, reached through the start command. From Running, the module can be put into the Paused state with the pause command, or stopped (i.e., moved to the Stopped state) with the stop command. From the Paused state, the module can be switched back to Running with the resume command, or it can be stopped. Once in the Stopped state, the module can only be started again (i.e., returned to Running). The Destructor state is reachable from every state by sending the quit command. Changes of state are triggered by the watchdog thread in response to messages from the Switch Interface.
In the Running state, the internal control loop is executed: the robot can receive commands and send its state. The Paused state freezes the internal control loop so that, once resumed, the last command is executed.
In the Stopped state, the internal control loop is exited, as when the program is closed, but it can be restarted using the Switch Interface. A generic representation of a control module using the GYM template, together with its related widget, is depicted in Figure 9.
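The transitions described above can be encoded compactly. The state and command names follow the text; the table-based implementation is our own sketch:

```python
# Transition table for the GYM switch-interface state machine: (state, command) -> next state.
TRANSITIONS = {
    ("constructor", "start"):  "running",
    ("running",     "pause"):  "paused",
    ("running",     "stop"):   "stopped",
    ("paused",      "resume"): "running",
    ("paused",      "stop"):   "stopped",
    ("stopped",     "start"):  "running",
}

def next_state(state, command):
    if command == "quit":       # quit reaches the Destructor from any state
        return "destructor"
    # commands not valid in the current state leave the module unchanged
    return TRANSITIONS.get((state, command), state)
```

Keeping the table in the non-customizable watchdog thread is what guarantees that a buggy run() body can never block a stop or quit request from the pilot.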
Thanks to the GYM classes and functions, our team managed to focus on the core development of each DRC task in a very short time. It is worth noting that, although the perception module is not a control module proper, since it does not send references to the robot joints, it was also developed using the GYM template. We will now describe the main components of a GYM module, using the module designed for the driving task as an example.
For example, the state machine for the driving module is shown in the figure below. Global localization is handled by a Kalman Filter that fuses local odometry from the motors with global position estimates from the Marvelmind sensors. The Kalman Filter itself is very simple, as it only maintains the position estimate and its 3x3 covariance, but a little extra work goes into processing the position update step from the Marvelmind.
Since the Marvelmind readings can lag when the robot is moving, the estimated variance of the measurement is scaled with the velocity of the robot. This is not an accurate noise model, as the measurement error is not truly Gaussian, but it works well enough. The Position Control Mode handles most of the driving modes for the robot. It generates a point-to-point trajectory and then uses simple PID controllers, along with feedback from the localization system, to generate velocity commands for each axis, which are then sent to the motor controller.
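The velocity-scaled update step can be sketched per axis as follows. The base variance and scaling gain are illustrative assumptions, not values from the actual code:

```python
def fuse_position(x_est, p_est, z, speed, r_base=0.01, k_vel=0.5):
    """Scalar Kalman position update with velocity-inflated measurement noise.

    x_est, p_est: prior estimate and variance for one axis
    z: Marvelmind position reading; speed: current robot speed
    """
    r = r_base * (1.0 + k_vel * speed)   # inflate variance when moving fast
    k = p_est / (p_est + r)              # Kalman gain
    x_new = x_est + k * (z - x_est)      # corrected estimate
    p_new = (1.0 - k) * p_est            # reduced uncertainty
    return x_new, p_new
```

The effect is exactly the behavior described above: a reading taken while the robot is moving fast gets a larger variance, so the filter pulls the estimate toward it more gently.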
The Vision Control Mode is used for all motions where the markers on the floor need to be tracked. It does not use the global localization system and instead only considers the relative displacement of the markers from their target. In practice, this is still handled by computing an 'estimated pose' of the robot so that the trajectory generation system can work in the same fashion, but in essence it is just driving the markers in the image to a target location.
This control mode also handles its own 'localization' using a second Kalman Filter that combines the wheel odometry with the 'position estimates' provided by the CameraTracker. Like the Position Control Mode, it uses simple PID controllers to generate velocity commands for each axis, which are sent to the motor controller. The Motor Driver runs on a Teknic ClearCore, which was generously provided by Teknic for the project along with the drive motors for the base. This board functions very much like an 'industrial Arduino', so much so that Teknic provides an Arduino-compatible library, allowing it to be programmed directly from the Arduino IDE. The board is responsible for all of the real-time driving of the motors (the drive motors and the lifter motor), the servo that drops the dominoes, the optical endstop for the lifter, and the buttons used to operate the lifter manually.
Communication from the Raspberry Pi that is specific to the lifter is routed to the lifter action handler, which has simple state machines for operating the servo, moving the lifter to a specific height, and homing the lifter.
Velocity commands for the base are routed to the robot velocity handler, which takes a cartesian velocity in the local robot frame, converts it to wheel speeds, and then updates the commanded velocity for each motor. It also returns an odometry estimate from the motors, although, due to how the ClearPath motors work, this is not a true encoder reading.
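The text only says that a local-frame cartesian velocity is converted to wheel speeds; the mecanum inverse kinematics below is one common such mapping and is assumed here for illustration, not taken from the Domino Robot code (the dimensions are placeholders):

```python
def cartesian_to_wheel_speeds(vx, vy, wz, wheel_radius=0.05, lx=0.2, ly=0.15):
    """Map a body-frame velocity (vx forward, vy left, wz yaw rate) to wheel
    speeds in rad/s for a four-wheel mecanum base (FL, FR, RL, RR)."""
    l = lx + ly  # half wheelbase plus half track width
    return [
        (vx - vy - l * wz) / wheel_radius,  # front-left
        (vx + vy + l * wz) / wheel_radius,  # front-right
        (vx + vy - l * wz) / wheel_radius,  # rear-left
        (vx - vy + l * wz) / wheel_radius,  # rear-right
    ]
```

Inverting the same matrix over a timestep is also how wheel speeds would be turned back into the odometry estimate mentioned above.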
However, the motors themselves are extremely accurate and follow the velocity commands very well, so not reading directly from an encoder is not a significant problem. That is a high-level overview of how the software for the Domino Robot works. If you are interested in looking at the actual code, you can find it all on GitHub.
Domino Robot Software Architecture. The application layer for a painting system is all about spray rates, nozzles, and speed. The robot in the next work cell might be dispensing adhesive, so it will have a different graphical user interface even if the underlying software is identical to that of the painting work cell.
Each application has a common base architecture, somewhat like the operating system of a personal computer, which gives the computer a base level of functionality. The software then branches off into the application area, much as Excel, Word, or Outlook do on a PC.
Robot manufacturers strive to match the function of the robot to the application and to make the software architecture seamless. Robots with increased dexterity, and those with unique link and joint configurations, meet specific application needs such as wafer handling, car-body transfer, non-destructive testing, confined-area welding, and palletizing. Tool changers have their own control modules. The first tool changer we designed used DeviceNet, then the standard at several companies. DeviceNet has evolved over the last 15 years, but companies have since moved toward Ethernet or Profinet.