Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Phidgets interface kit 8/8/8 with EZ1 sonar sensor and ROS Hydro

Hello, I wrote C++ code using the Phidgets InterfaceKit 8/8/8 and the EZ1 sonar sensor, to be used for collision avoidance on a robot. But I'm having trouble creating the package, or somehow building my code, with ROS Hydro. So far I have used catkin to build the tutorial examples as well as this package: http://wiki.ros.org/phidgets_drivers, with success, but as far as I understand that package only covers phidgets_ir and phidgets_imu. Should I try to adjust the package mentioned above or look for something more specific? Thanks.
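A minimal sketch of what such a node could look like once the Phidgets reading is available. The `readSonarVoltage()` helper and the volts-to-meters scale are placeholders (the real value would come from the Phidgets InterfaceKit API), and the frame name and EZ1 figures are assumptions to check against the datasheet:

```cpp
// Minimal sketch: publish an EZ1 reading as sensor_msgs/Range.
// readSonarVoltage() is a placeholder, not a real Phidgets API call; replace it with
// the actual InterfaceKit analog read. The scale factor, frame name and ranges are assumptions.
#include <ros/ros.h>
#include <sensor_msgs/Range.h>

static double readSonarVoltage()
{
  return 0.5;  // placeholder value in volts
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ez1_sonar_node");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::Range>("sonar/range", 10);

  ros::Rate rate(10.0);
  while (ros::ok())
  {
    sensor_msgs::Range msg;
    msg.header.stamp = ros::Time::now();
    msg.header.frame_id = "sonar_link";                    // assumed frame name
    msg.radiation_type = sensor_msgs::Range::ULTRASOUND;
    msg.field_of_view = 0.5;                               // rad, rough EZ1 beam width (assumption)
    msg.min_range = 0.15;                                  // m, check against the EZ1 datasheet
    msg.max_range = 6.45;
    msg.range = readSonarVoltage() * 1.0;                  // placeholder volts->meters conversion
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```

With a `package.xml` and `CMakeLists.txt` depending only on roscpp and sensor_msgs, this builds as an ordinary catkin package, so adapting phidgets_drivers is not strictly required.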

Fusing 2 sonars / pingers as continuous sensors or 'GPS'

Picking up where I left off in [ekf_localization-node-not-responding](http://answers.ros.org/question/197404/ekf_localization-node-not-responding/), I am trying to fuse x and y position from 2 sonars (Tritech Micron MK III) with (double-integrated accelerometer) position from an IMU (Xsens MTi 30). This happens inside a rectangular pool, so each sonar:

- will track a side wall and measure the absolute distance to it (one sonar looking at the short wall, the other at the long wall), so we have a distance and a heading attached to each measurement,
- and reports x and/or y position, either
    - as absolute position, with the x axis parallel to the long side of the pool and y parallel to the short side,
    - or as relative position to the starting point (`differential_integration == true`).

(The point being to get a quicker update rate than doing machine vision on a full scan.)

The problem now is: **How can I account for the sonar's offset from base_link**, since **the measurements aren't really in the body frame?** I could either

1. report the data with frame_id `odom` or `map`, but how could I account for the mounting offset, especially when the sub is not aligned with the side walls? Would that be possible with a single instance of robot_localization, or should I then treat them as 'GPS'?
2. report the data in a `Sonar1` and `Sonar2` frame and use the current heading of the sonar head (plus the fixed translation) to publish a transform from `Sonar1` to `base_link`. This means that I will have to use the attitude solution to track the wall, but that's outside the scope of robot_localization.

I believe the latter is the better approach. Does this make sense, or how could it be improved? Thanks, Rapha

----------

Wrap up on how to fuse the sonars
---------------------------------

I am still working through implementing some of these steps, but I will document the process here. These are the frames and transforms needed for fusing the sonars:

* Frames:
    * `odom`
    * `pool`
    * `sonar1` (will be fused for x only)
    * `sonar2` (will be fused for y only)
    * `base_link`
    * `imu`
* Transforms:
    * `odom->base_link` (generated by ekf_localization)
    * `odom->pool` (static transform, accounting for the rotational offset of the pool walls/axes with respect to ENU)
    * `pool->sonar1` (broadcasts a translation of the sonar mounting position relative to `base_link`, rotated by the yaw angle of the `pool->base_link` transform)
    * `pool->sonar2`
    * `base_link->imu`
* Data:
    * /sonar/position/x in frame `sonar1`
    * /sonar/position/y in frame `sonar2`
    * /imu/data in frame `imu`
    * /svs/depth in frame `svs`

WORK IN PROGRESS

----------

EDIT (regarding the 1st edit in the Answer): I have set it up the way you recommended and I publish a transform from `odom-->sonar` every time I publish a new sonar reading (just before). The rotational part of that transform uses the current attitude as published by the robot_localization node. robot_localization sees the following input at the moment:

- attitude from the IMU (imu/data)
- position from the sonars (sonar/position0 and sonar/position1)

However, after a few seconds the robot_localization node slows down to about 1 update every 5 seconds...

**EDIT:** By 'slows down' I mean the message broadcast rate slows down (`rostopic hz /odometry/filtered`). I wasn't sure if using the odom attitude was the right thing to use, so I tried it with the direct attitude solution from the IMU. The robot_localization node still slows down.
My [Launchfile](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch) is below and I've uploaded a rosbag [here](https://dl.dropboxusercontent.com/u/13645240/2015-01-19-15-00-43.bag). I tried to create a debug file, but no file was created anywhere. Any ideas? Thank you so much, Rapha

----

**EDIT 2:** I am running branch master 3c3da5a537

----

**EDIT 3:** I have investigated a little further. You said it might be to do with missing/broken transforms. When initially posting this question I was publishing the transforms (odom->sonar) upon every execution of the sonar processing node (whenever a raw message from the sonar arrived), immediately before publishing the sonar position (in frame sonar) itself. I wasn't sure how ROS handles timestamps and whether robot_localization would receive and process the transform before the position update. So I decided to publish the transform whenever the IMU attitude gets published. This happens at 400 Hz and is thus much more often than the sonar position (25-30 Hz). This improved the problem: instead of slowing down virtually instantly, it now runs for a while before slowing down and eventually stopping altogether. Also, if I now stop the ekf_localization node and restart it, it will run again for about ~45 s. Stopping the process with Ctrl+C takes about 10 s; when the node is running normally, this is virtually instant. Sometimes (1 in 5 tries perhaps) ekf_localization complains:

    at line 240 in /tmp/buildd/ros-indigo-tf2-0.5.6-0trusty-20141014-0329/src/buffer_core.cpp
    Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "base_link" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (-nan -nan -nan -nan)

I believe there might be a connection with the sometimes erratic data from the sonars, but I can't quite pinpoint it. I'll try improving their accuracy in the meantime.

**EDIT 4:** After changing the covariance matrix, robot_localization does not crash any more. But... having looked at more data sources working together, I came across another problem. This very much relates to the initial question: [ekf_localization-node-not-responding](http://answers.ros.org/question/197404/ekf_localization-node-not-responding/). I am fusing the absolute orientation from the IMU differentially. However, when I look at the generated attitude solution (and I've done this for the 1st time in degrees now), the value does not track the change in attitude. I.e. a 90° change in yaw gets registered as ~3 to 9°, and it's quite sluggish. I've tried this with only the IMU publishing data, both with robot_localization running and without it. Would you mind looking at the attached bag files? The files show a -90° rotation about yaw followed by +90° about yaw (ending up in the same orientation):

- [IMU and robot_localization](https://dl.dropboxusercontent.com/u/13645240/2015-02-01-00-06-39.bag)
- [IMU by itself](https://dl.dropboxusercontent.com/u/13645240/2015-02-01-00-28-07.bag)
- The [launchfile](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch) below is still up to date.
- The IMU publishes in the `xsens` frame and (as discussed in the other thread) there is a static transform `base_link->xsens` to account for the mounting.

Thank you very much for your help, Rapha

----

**EDIT 5:** Since fixing the covariance matrix, robot_localization hasn't crashed any more.
I am a little bit confused as to how the depth sensor (aka absolute z in odom) and the sonars (absolute x, y in odom) get fused. As per your suggestion I am publishing a transform with the offset of the sensor mounts to base_link as the translation and the current attitude solution from robot_localization as the rotation (I hope I understood you correctly here). I have set `differential_integration = true` for the sonars (x, y in odom), but I don't want to do this for the depth sensor (z in odom). When only fusing the depth sensor (and only using z in the ekf) I also get small values in the fused x, y. To me this is odd, as I thought only the z measurement gets fused, but I can kind of make sense of it: the ekf fuses the untransformed z, then transforms it to base_link, which produces x, y distances. *My question is:* if I'm trying to fuse the x, y data differentially integrated, won't the fused x, y position be locked to the value generated from fusing the z measurements? I could fuse the depth differentially as well, which should solve this, but the depth sensor does make for a very nice absolute measurement.

----

**EDIT 5b:** [2015-02-11-01-37-30.bag](https://www.dropbox.com/s/7wyfxequt858u9z/2015-02-11-01-37-30.bag?dl=0) *I updated the ekf [launchfile](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch) below, but the rosbag is without the ekf.* I can currently only move along heave (z) (and, due to issues with the buoyancy, not that much):

- We started off slightly submerged,
- went down along negative heave / z (no change in attitude intended),
- and came back up to the surface.

**Z**, aka `/svs/depth`, represents this exactly. **X**, aka `sonar/positionBMT`, and **Y**, aka `sonar/positionUWE`, should not have changed, but there is a jump in **X** when surfacing (from ~4 m to ~2 m). Thank you very much for your help.

----

**EDIT 6:** Below is the diagram. For the time being the sub's orientation is fixed in relation to the pool walls, but that will change. The sonars are each tracking one of the pool walls. I also realised 2 things I am doing wrong:

- I assumed my `odom` frame is aligned with ENU as well as with the pool. Since the pool isn't pointing north, this is not true.
    - The sonars measure distance in relation to the long and short side of the pool.
    - I therefore need a `pool` frame that is rotated from `odom`, but shares the same origin.
- The `odom->sonarBMT/sonarUWE/SVS` transforms I am publishing are wrong, I believe:
    - I think I misunderstood Tom's instructions in the initial answer. At the moment my transform looks like this:
        - *The rotational component has the rotation of `odom->base_link`.*
        - *The translational component has the distance of `base_link` to `sonarBMT/sonarUWE/SVS` measured in `base_link`.*
    - I think it should be a pure translational transform, where the translation is computed from the current robot heading (aka the rotation in `pool->base_link`) and the mounting position offset in `base_link`.

![Pool Experiment V1](https://www.dropbox.com/s/3yvqjsrfc73uyb7/Pool_Experiment_Setup.png?dl=1)

----

**EDIT 6b:** I have these new frames and transformations running, and the `pool->SONAR_UWE` as well as the `pool->SONAR_BMT` transforms look good. The depth data seems to get fused correctly, but the X and Y positions from the sonars do not. Might the problem be that my X and Y data coming from the sonars is fused separately? After being rotated, each individual measurement is not 'just' x or y.
Once in `odom` it has a non-zero x or y component respectively, and is then fused as absolute... I might be totally misunderstanding part of the process here, though.

**EDIT 6c:** 6b might have some truth to it. My `SONAR_BMT` (aka the x-axis, pool-frame, sonar) wasn't outputting data. While that is the case, `rosrun tf tf_echo pool base_link` returns a fitting solution and so does `rosrun tf tf_echo odom base_link`. As soon as the sonar outputs data again, the ekf solution outputs rubbish again. I will try to put the sonar measurements together in one message and try to fuse that.

----

[Launchfile on Github](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch)
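For reference, a rough sketch of the pure-translation `pool->sonar` broadcaster described in EDIT 6. The frame names (`pool`, `base_link`, `SONAR_UWE`), the 50 Hz rate and the mounting offset are assumed placeholders, and it uses the old tf API rather than tf2:

```cpp
// Sketch of the pure-translation pool->sonar transform described in EDIT 6.
// Assumptions: frame names, the 50 Hz rate and the mounting offset are placeholders.
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
#include <tf/transform_datatypes.h>
#include <cmath>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_frame_broadcaster");
  ros::NodeHandle nh;
  tf::TransformListener listener;
  tf::TransformBroadcaster broadcaster;

  // Mounting offset of the sonar head, expressed in base_link (example values, meters).
  const tf::Vector3 mount_offset(0.30, 0.10, 0.0);

  ros::Rate rate(50.0);
  while (ros::ok())
  {
    tf::StampedTransform pool_to_base;
    try
    {
      listener.lookupTransform("pool", "base_link", ros::Time(0), pool_to_base);
    }
    catch (const tf::TransformException&)
    {
      ros::Duration(0.1).sleep();
      continue;
    }

    // Rotate the mounting offset by the current yaw so the translation is expressed
    // in the pool frame; the rotation of the published transform stays identity.
    const double yaw = tf::getYaw(pool_to_base.getRotation());
    tf::Vector3 offset_in_pool(
        mount_offset.x() * std::cos(yaw) - mount_offset.y() * std::sin(yaw),
        mount_offset.x() * std::sin(yaw) + mount_offset.y() * std::cos(yaw),
        mount_offset.z());
    tf::Transform t(tf::createIdentityQuaternion(), offset_in_pool);
    broadcaster.sendTransform(tf::StampedTransform(t, ros::Time::now(), "pool", "SONAR_UWE"));
    rate.sleep();
  }
  return 0;
}
```

The same loop would be duplicated (or parameterised) for `SONAR_BMT`; the rotation is deliberately left as identity so that each sonar frame only carries the measured x (or y) axis.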

Viewing p2os sonar in RViz

I would like to get the sonar sensors for my Pioneer P3-DX robot up and running. I am utilizing ROS Hydro on Ubuntu 12.04 on both a base laptop and a PC on the P3-DX. I've gone into [p2os/p2os_driver/src/p2os.cc](https://github.com/allenh1/p2os/blob/cd47a7da1fa7773bc66c6f5cca42be9df6d78740/p2os_driver/src/p2os.cc) and changed the parameter `use_sonar` to true in order to get the sonar sensors firing and the `p2os_msgs/SonarArray` messages published to the `/sonar` topic. However, because p2os_msgs/SonarArray is not a standard `sensor_msg`, I am unable to view the sonar data in RViz or use it for navigation. Any thoughts? Thanks in advance.
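For reference, a hedged sketch of a small converter node that republishes the sonar readings as a `sensor_msgs/PointCloud`, which RViz can display directly. It assumes the SonarArray message exposes a `ranges` array (check with `rosmsg show p2os_msgs/SonarArray`) and uses placeholder mounting angles that must be replaced with the real P3-DX sonar ring geometry:

```cpp
// Sketch: republish p2os_msgs/SonarArray as a sensor_msgs/PointCloud for RViz.
// Assumptions: SonarArray has a float64[] `ranges` field, and the angles below are
// placeholders for the actual P3-DX sonar mounting geometry.
#include <ros/ros.h>
#include <p2os_msgs/SonarArray.h>
#include <sensor_msgs/PointCloud.h>
#include <geometry_msgs/Point32.h>
#include <algorithm>
#include <cmath>

ros::Publisher cloud_pub;

void sonarCallback(const p2os_msgs::SonarArray::ConstPtr& msg)
{
  // Placeholder yaw angles (radians) for each transducer, indexed like msg->ranges.
  static const double kAngles[] = {1.57, 0.87, 0.52, 0.17, -0.17, -0.52, -0.87, -1.57};

  sensor_msgs::PointCloud cloud;
  cloud.header.stamp = ros::Time::now();
  cloud.header.frame_id = "base_link";  // assumed frame for the sonar ring

  const size_t n = std::min(msg->ranges.size(), sizeof(kAngles) / sizeof(kAngles[0]));
  for (size_t i = 0; i < n; ++i)
  {
    geometry_msgs::Point32 p;
    p.x = msg->ranges[i] * std::cos(kAngles[i]);
    p.y = msg->ranges[i] * std::sin(kAngles[i]);
    p.z = 0.0;
    cloud.points.push_back(p);
  }
  cloud_pub.publish(cloud);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_to_pointcloud");
  ros::NodeHandle nh;
  cloud_pub = nh.advertise<sensor_msgs::PointCloud>("sonar_cloud", 10);
  ros::Subscriber sub = nh.subscribe("/sonar", 10, sonarCallback);
  ros::spin();
  return 0;
}
```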

Clearing and marking obstacles using range_sensor_layer

Hi all, I have been using [range_sensor_layer](http://wiki.ros.org/range_sensor_layer) to insert obstacles detected by the sonar sensors into the costmap. It seems to be working pretty well, but I have a couple of concerns:

1. If there are no obstacles in front of the sensor, why does it insert obstacles at the maximum range of that sonar? For example, if the maximum range is 3 m and there are no obstacles in front of the sensor, it will still insert an obstacle at 3 m from the sensor.
2. This is more of a doubt. I added a parameter to specify the range so that only obstacles within that range will be inserted into the costmap. Whenever there are no obstacles in front of the sonar, I set `clear_sensor_cone = true`, which clears the entire costmap; but what I want is that obstacles which have already been inserted into the costmap should not be cleared unless the sonar reports that the obstacle is no longer there. What is the best way to do something like this?

Thanks in advance
Naman Kumar

range_sensor_layer can't transform from map to /us1

Hi all, I am using ultrasound sensors and a Hokuyo laser for navigation, with [range_sensor_layer](http://wiki.ros.org/range_sensor_layer) for the ultrasound sensors. The problem is that when I publish the `sensor_msgs/Range` message and use the range_sensor_layer, I get the following error:

    [ERROR] [1431956010.233195664]: Range sensor layer can't transform from map to /us1 at 1431956010.107522

I am using `static_transform_publisher` for the transformation between `/base_link` and `/us1`:

I am not able to see why I am getting this error, as the **tf tree** is properly set up and there is a transformation from map to /us1:

![image description](/upfiles/14319563319307995.png)

**global_costmap_params.yaml**

    plugins:
      - {name: static_layer, type: "costmap_2d::StaticLayer"}
      - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
      - {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
      - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

    #Configuration for the sensors that the costmap will use to update a map
    obstacle_layer:
      observation_sources: laser_scan_sensor
      laser_scan_sensor: {data_type: LaserScan, sensor_frame: /laser, topic: /scan, expected_update_rate: 0.4, observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 2.0, min_obstacle_height: 0.0, inf_is_valid: true}

    sonar_layer:
      topics: ['/US1']
      no_readings_timeout: 1.0
      clear_threshold: 0.2
      mark_threshold: 0.8
      clear_on_max_reading: true

**local_costmap_params.yaml**

    plugins:
      - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
      - {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
      - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

    #Configuration for the sensors that the costmap will use to update a map
    obstacle_layer:
      observation_sources: laser_scan_sensor
      laser_scan_sensor: {data_type: LaserScan, sensor_frame: /laser, topic: /scan, expected_update_rate: 0.4, observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 2.0, min_obstacle_height: 0.0, inf_is_valid: true}

    sonar_layer:
      topics: ['/US1']
      no_readings_timeout: 1.0
      clear_threshold: 0.2
      mark_threshold: 0.8
      clear_on_max_reading: true

Does anyone have any idea what is going wrong here? Please let me know if you need more information from me. Any help will be appreciated. Thanks a lot.
Naman Kumar

a cheap Laser / IR solution for 2D mapping?

Hi guys, does anyone know of a relatively cheap sensor solution for 2D mapping? It doesn't need to be laser-accurate, just good enough to build a 2D map for some basic indoor navigation jobs. Sonar, IR, anything that might do the job. **UPDATE:** I also found [LiDAR-Lite](http://pulsedlight3d.com). I guess it's fairly new to the market. It is also very cheap, but it would have to be modified to cover 360 degrees around it. Has anyone tried it before? Thanks for the ideas and answers.

AR.Drone 2.0 altitude from navdata is showing wrong value

I am having a problem, and this issue is very important to me; if anyone helps me with this, I can complete my project. The problem is: when my AR.Drone is flying, I can read the altitude from /ardrone/navdata/altd. When the drone moves over a table of height x, the reported altitude should normally be the previous value minus x, but this is not happening. Why? Example: while flying, say the altitude is 800 mm and the table height is 300 mm; when the drone is above the table the altitude should read 800 - 300 = 500 mm, but it does not work that way.

Using sonar sensor in Gazebo for Pioneer3dx Model

Hello, I am using the Pioneer3dx model in Gazebo from https://github.com/SD-Robot-Vision/PioneerModel.git and I couldn't find the robot publishing any sonar data. When I type `rostopic list` while the robot is moving in simulation, the topics that print out are:

    /clock
    /cmd_vel
    /gazebo/link_states
    /gazebo/model_states
    /gazebo/parameter_descriptions
    /gazebo/parameter_updates
    /gazebo/set_link_state
    /gazebo/set_model_state
    /odom
    /p3dx/base_pose_ground_truth
    /p3dx/laser/scan
    /rosout
    /rosout_agg
    /statistics
    /tf

But when I look through each topic, I do not find any sonar data. I was wondering if I had somehow missed the sonar data, or the topic publishing the sonar? And if I didn't, how do I add sonar sensors to this model? Or is there already a model with the sonar sensors? Thank you so much!

How to use the Sonars from a Pioneer 3at for navigation (p2os)

Hello, I'm new to ROS so please be kind to me :) I'm using a Pioneer 3-AT with the p2os package, and 2D navigation with the Hokuyo laser scanner works fine. Now I want to include the robot's sonar sensors to detect obstacles which can't be detected by the laser scanner. Can anyone help me use the p2os_driver/SonarArray message for navigation, or show how I can convert it to a sensor_msgs/Range message?
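A sketch of one possible conversion, splitting each SonarArray reading into its own `sensor_msgs/Range` topic so that a costmap plugin such as range_sensor_layer can consume it. The `ranges` field name, the include path (depending on the p2os version the message lives in p2os_msgs or p2os_driver), the frame naming scheme (`sonar_0` ... `sonar_15`, each needing a static transform from `base_link`) and the beam parameters are all assumptions:

```cpp
// Sketch: split p2os SonarArray readings into one sensor_msgs/Range topic per transducer.
// Assumptions: a float64[] `ranges` field in SonarArray, sonar_i frames exist in tf,
// and the field of view / range limits are placeholder guesses.
#include <ros/ros.h>
#include <p2os_msgs/SonarArray.h>
#include <sensor_msgs/Range.h>
#include <sstream>
#include <vector>

std::vector<ros::Publisher> range_pubs;

void sonarCallback(const p2os_msgs::SonarArray::ConstPtr& msg)
{
  for (size_t i = 0; i < msg->ranges.size() && i < range_pubs.size(); ++i)
  {
    sensor_msgs::Range r;
    r.header.stamp = ros::Time::now();
    std::ostringstream frame;
    frame << "sonar_" << i;              // assumed frame naming convention
    r.header.frame_id = frame.str();
    r.radiation_type = sensor_msgs::Range::ULTRASOUND;
    r.field_of_view = 0.26;              // ~15 deg beam, a guess for the Pioneer sonars
    r.min_range = 0.1;
    r.max_range = 5.0;
    r.range = msg->ranges[i];
    range_pubs[i].publish(r);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_array_to_range");
  ros::NodeHandle nh;
  const int num_sonars = 16;             // Pioneer 3-AT default; adjust if different
  for (int i = 0; i < num_sonars; ++i)
  {
    std::ostringstream topic;
    topic << "sonar_" << i;
    range_pubs.push_back(nh.advertise<sensor_msgs::Range>(topic.str(), 10));
  }
  ros::Subscriber sub = nh.subscribe("/sonar", 10, sonarCallback);
  ros::spin();
  return 0;
}
```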

Replacing Kinect with Sonars

I want to replace the Kinect sensor with sonars for obstruction detection. The Kinect has a hard time detecting glass and working in sunshine, which makes it a pretty bad option in the building I'm trying to use it in. Currently, I have five sonars attached to the front of the TurtleBot. They detect how far away an object is and translate that distance reading into an analog voltage, which is converted to digital by an Arduino before being sent to my computer. How might I use this data to mimic the data output of the Kinect sensor?
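One commonly suggested approach, shown as a hedged sketch below, is to pack the five distances into a sparse `sensor_msgs/LaserScan`, since that is the message type the TurtleBot navigation stack already consumes in place of the Kinect's fake laser scan. The input topic, message type (`std_msgs/Float32MultiArray` from the Arduino side), beam angles, and frame are all assumptions:

```cpp
// Sketch: build a 5-beam sensor_msgs/LaserScan from five sonar distances.
// Assumptions: the Arduino publishes std_msgs/Float32MultiArray with five distances in
// meters ordered right to left (index 0 at angle_min); angles, frame and limits are guesses.
#include <ros/ros.h>
#include <std_msgs/Float32MultiArray.h>
#include <sensor_msgs/LaserScan.h>

ros::Publisher scan_pub;

void sonarCallback(const std_msgs::Float32MultiArray::ConstPtr& msg)
{
  if (msg->data.size() < 5)
    return;

  sensor_msgs::LaserScan scan;
  scan.header.stamp = ros::Time::now();
  scan.header.frame_id = "sonar_array_link";   // assumed frame, needs a tf to base_link
  scan.angle_min = -0.7;                        // rightmost sonar yaw (rad), assumption
  scan.angle_max = 0.7;                         // leftmost sonar yaw (rad), assumption
  scan.angle_increment = (scan.angle_max - scan.angle_min) / 4.0;  // 5 beams
  scan.range_min = 0.15;
  scan.range_max = 6.0;
  scan.ranges.assign(msg->data.begin(), msg->data.begin() + 5);
  scan_pub.publish(scan);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_to_laserscan");
  ros::NodeHandle nh;
  scan_pub = nh.advertise<sensor_msgs::LaserScan>("scan", 10);
  ros::Subscriber sub = nh.subscribe("sonar_distances", 10, sonarCallback);
  ros::spin();
  return 0;
}
```

A five-beam scan is obviously much coarser than the Kinect's fake laser, so the costmap parameters would likely need retuning.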

What frame to use when adding range_sensor_layer (sonar) to costmap

I am trying to add a sonar layer to my costmap to detect obstacles. I have used the range_sensor_layer (which is a costmap plugin) for this purpose and have added the sonar layer parameters properly in global_costmap_params, local_costmap_params and costmap_common_params. Now, when I run RViz, the sonar layer is not showing up. When I try to add it manually by selecting the relevant topic (/sonar), it shows an error about the frame: it says that the frame does not exist. I have seen that the laser scan message uses frame_id `camer_depth_image`. I want to know what frame_id I should use in the sonar's message; it should be a frame that already exists, and the sonar message will be broadcast in it. (I am new to frames in ROS.) Thanks
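A minimal sketch of the usual pattern, with all frame names, offsets, and ranges made up for illustration: give the sonar its own frame (e.g. `sonar_front`), publish a static transform from `base_link` to that frame, and stamp every `sensor_msgs/Range` message with that same `frame_id` so costmap_2d and RViz can resolve it through tf:

```cpp
// Sketch: static transform base_link -> sonar_front plus a Range publisher using that frame.
// Frame names, the 0.20 m offset, and the range limits are example values, not real ones.
#include <ros/ros.h>
#include <sensor_msgs/Range.h>
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_frame_example");
  ros::NodeHandle nh;

  // One-shot static transform: where the sonar sits on the robot (example numbers).
  tf2_ros::StaticTransformBroadcaster static_broadcaster;
  geometry_msgs::TransformStamped tfs;
  tfs.header.stamp = ros::Time::now();
  tfs.header.frame_id = "base_link";
  tfs.child_frame_id = "sonar_front";
  tfs.transform.translation.x = 0.20;
  tfs.transform.rotation.w = 1.0;        // identity rotation
  static_broadcaster.sendTransform(tfs);

  ros::Publisher pub = nh.advertise<sensor_msgs::Range>("sonar", 10);
  ros::Rate rate(10.0);
  while (ros::ok())
  {
    sensor_msgs::Range msg;
    msg.header.stamp = ros::Time::now();
    msg.header.frame_id = "sonar_front";  // must match the tf frame above
    msg.radiation_type = sensor_msgs::Range::ULTRASOUND;
    msg.field_of_view = 0.5;
    msg.min_range = 0.02;
    msg.max_range = 4.0;
    msg.range = 1.0;                      // placeholder reading
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```

The same effect can also be achieved by putting the sonar link in the URDF or by running a separate `static_transform_publisher`; the key point is that the `frame_id` in the message must match a frame that is connected to the rest of the tf tree.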

Is there a package for a two-motor boat / catamaran? Or does the Hydro kingfisher package work on Indigo?

I am building a boat/catamaran that has a camera, GPS, and a sonar. It navigates using two fixed motors. Can I simulate this in Gazebo? What is the best way to build the interface for this robot?

How can I use costmap_2d with a sonar sensor

I am working on an obstacle-avoiding robot which should build a map. I am using a sonar sensor to avoid the obstacles. Can someone help me with how to build a map based on sonar sensor data instead of laser data? The .yaml file I am using is below; it uses a laser sensor. How can I convert it to use a sonar sensor?

    #Sample parameters
    footprint: [[0.1, 0.0], [0.0, 0.1], [0.0, -0.1], [-0.1, 0.0]]
    #robot_radius: 0.10
    transform_tolerance: 0.3
    update_frequency: 5.0
    publish_frequency: 2.0
    global_frame: robot_0/odom
    robot_base_frame: robot_0/base_link
    resolution: 0.02
    width: 20.0
    height: 20.0
    rolling_window: true
    track_unknown_space: true

    plugins:
      - {name: explore_boundary, type: "frontier_exploration::BoundedExploreLayer"}
      - {name: sensor, type: "costmap_2d::ObstacleLayer"}
      - {name: inflation, type: "costmap_2d::InflationLayer"}

    explore_boundary:
      resize_to_boundary: true
      frontier_travel_point: closest

    sensor:
      observation_sources: laser
      laser: {data_type: LaserScan, clearing: true, marking: true, topic: /robot_0/base_scan, inf_is_valid: true, raytrace_range: 1.5, obstacle_range: 0.9}

    inflation:
      inflation_radius: 0.15

Please help me, thank you.

How to access RosAria/sonar PointCloud data

When I used `rostopic echo /RosAria/sonar` it showed a `points:` field containing x, y, z data. I want to access these x, y, z values from a C++ file. I'm trying `msg->points.data()`.
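A short sketch, assuming `/RosAria/sonar` is a `sensor_msgs/PointCloud` (which matches the `points:` x, y, z layout seen with `rostopic echo`; confirm with `rostopic type /RosAria/sonar`). Each sonar return is then one `geometry_msgs/Point32` in `msg->points`:

```cpp
// Sketch: subscribe to the RosAria sonar cloud and read each point's x, y, z.
// Assumes the topic type is sensor_msgs/PointCloud (verify with rostopic type).
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>

void sonarCallback(const sensor_msgs::PointCloud::ConstPtr& msg)
{
  for (size_t i = 0; i < msg->points.size(); ++i)
  {
    ROS_INFO("sonar %zu: x=%.2f y=%.2f z=%.2f",
             i, msg->points[i].x, msg->points[i].y, msg->points[i].z);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "rosaria_sonar_reader");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/RosAria/sonar", 10, sonarCallback);
  ros::spin();
  return 0;
}
```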

Can I localize with imprecise/noisy sonar sensor?

I have a small differential drive robot with a single sonar sensor mounted on the front of it. I also have an IMU and GPS on the vehicle. My goal is to navigate from one point to another in an unknown environment and avoid obstacles along the way. At the moment we're testing indoors, so the GPS is out of the picture. I want to be able to send a waypoint via rviz (simulating a GPS coordinate waypoint) and have the vehicle navigate there. I don't believe mapping is necessary for my application, however I think localization is - or else how would it know it arrived at its destination? I plan on using [robot localization](http://wiki.ros.org/robot_localization) to combine my IMU and odometry sensors and create a pose estimate. But won't that estimate drift over time? Does ROS have any packages that will enable my robot to correct for odometry drift and adjust itself in the map frame, using sonar data? What I've seen used so far is support for LIDAR and LaserScan messages along with gmapping or hector_slam, but I don't have that hardware.

range_sensor_layer marks max sonar range as obstacle?

I'm using Gazebo to simulate a 4-wheeled differential drive robot. The robot has a forward sonar sensor, so I added a [simulated sonar sensor](http://wiki.ros.org/hector_gazebo_plugins#GazeboRosSonar). The sonar sensor appears to work; it detects obstacles and the Range messages look correct (e.g. max range value when no obstacles). The visualization in Gazebo also looks correct. I use the [range_sensor_layer package](http://wiki.ros.org/range_sensor_layer) as a plugin for `costmap_2d`. My issue is that for some reason, when there is no obstacle and the sonar sensor is at max range, the costmap registers an obstacle. Below is a screenshot of Gazebo (left), RViz (top right), and the echoed Range message (bottom right). I rotated the vehicle in a circle without any obstacles, yet the costmap shows that the robot is surrounded by obstacles.

![image description](/upfiles/14671395403374078.png)

Now there's a parameter in `range_sensor_layer` called `clear_on_max_reading`, which will remove obstacles when a reading is max. However, I've found that this does more harm than good, because it will clear away actual obstacles by accident. For example, when it's navigating it runs along the wall and starts creating a wall of obstacles. Eventually it decides to turn, and as the range value maxes out, it clears a whole chunk of actual obstacle. Now there's a hole in the wall, so it tries to navigate towards the hole, and relearns that it's indeed an obstacle. This **repeats forever**. It's both funny and infuriating. Here are the YAML files I'm using for my costmap:

### costmap_common_params.yaml

    map_type: costmap
    origin_z: 0.0
    z_resolution: 1
    z_voxels: 2
    obstacle_range: 0.5
    raytrace_range: 0.5
    footprint: [[-0.21, -0.165], [-0.21, 0.165], [0.21, 0.165], [0.21, -0.165]]
    footprint_padding: 0.1

    plugins:
      - {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
      - {name: inflater_layer, type: "costmap_2d::InflationLayer"}

    sonar_layer:
      ns: ""
      topics: ["/sonar"]
      no_readings_timeout: 1.0
      clear_threshold: 0.2
      mark_threshold: 0.80
      clear_on_max_reading: false

    inflater_layer:
      inflation_radius: 0.3

### local_costmap_params.yaml

    local_costmap:
      global_frame: odom
      robot_base_frame: base_link
      update_frequency: 20.0
      publish_frequency: 5.0
      width: 10.0
      height: 10.0
      resolution: 0.05
      static_map: false
      rolling_window: true

### global_costmap_params.yaml

    global_costmap:
      global_frame: odom
      robot_base_frame: base_link
      update_frequency: 20
      publish_frequency: 5
      width: 40.0
      height: 40.0
      resolution: 0.05
      origin_x: -20.0
      origin_y: -20.0
      static_map: true
      rolling_window: false

In my robot URDF I include the [sonar_sensor macro](https://github.com/tu-darmstadt-ros-pkg/hector_models/blob/indigo-devel/hector_sensors_description/urdf/sonar_sensor.urdf.xacro) and instantiate my sonar sensor like so:

I'm not sure what's going on here. I'd appreciate any help.

sensor plugin

Hello, I need to simulate a sensor in Gazebo that detects the depth of holes in a wall. Which sensor is suitable for this task, and which plugin is used for it? Thank you.

Obstacle avoidance with navigation stack without map!

I've got a little robot with 4 sonar sensors. Is it possible to do obstacle avoidance without having a map? I've just finished configuring the navigation stack and setting up the launch files, but now I'm stuck. Thank you!

What is the difference between tf_broadcaster and tf_listener?

I'm trying to set up the tf configuration for the navigation stack, and I'm having some trouble understanding whether I need to write one tf_broadcaster and one tf_listener for each sensor, or just one tf_broadcaster per sensor.
For example:
My robot has 4 sonar sensors; I'm publishing their values as 4 Range messages and 1 LaserScan message.
In the same node where the sensor values are read, I wrote 4 tf_broadcasters for the 4 Range messages and 1 tf_broadcaster for the LaserScan message.
Should I also write the tf_listener? What is the right configuration for my case? Can you help me? Thank you!
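A rough sketch of that setup (frame names and mounting offsets are made up): a single node with one `tf::TransformBroadcaster` can publish all four sonar frames, and no tf_listener is needed in it; listeners only belong in nodes that consume transforms, such as the costmap inside move_base:

```cpp
// Sketch: one broadcaster publishing base_link -> sonar_0..sonar_3 at a fixed rate.
// The frame names and the x/y mounting offsets below are example values only.
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <tf/transform_datatypes.h>
#include <sstream>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_tf_broadcaster");
  ros::NodeHandle nh;
  tf::TransformBroadcaster broadcaster;

  // Mounting positions of the four sonars in base_link (example values, meters).
  const double x[4] = {0.15, 0.15, -0.15, -0.15};
  const double y[4] = {0.10, -0.10, 0.10, -0.10};

  ros::Rate rate(20.0);
  while (ros::ok())
  {
    for (int i = 0; i < 4; ++i)
    {
      std::ostringstream frame;
      frame << "sonar_" << i;
      tf::Transform t(tf::createIdentityQuaternion(), tf::Vector3(x[i], y[i], 0.0));
      broadcaster.sendTransform(
          tf::StampedTransform(t, ros::Time::now(), "base_link", frame.str()));
    }
    rate.sleep();
  }
  return 0;
}
```

Each Range (and LaserScan) message then only needs its `frame_id` set to the matching `sonar_i` (or laser) frame.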

Sonar on Pioneer 3-AT

I have to simulate the Pioneer 3-AT, so I got a model of it in Gazebo. It looks very nice, but the model only contains the visuals of the front and back sonars; I need them to actually publish messages. So I tried to use a Gazebo plugin (the plugin XML snippet was not preserved here; it configured a sonar sensor with topic `/pioneer/sonar/scan2` and frame `front_sonar_box2`). But when running it, there is no topic named /pioneer/sonar/scan2 and I can't see anything in RViz. Does anybody have an idea why this doesn't work? Thanks in advance!