
Not able to create mask for Red Line

Hi, I tried to implement the code to detect the red line that the car has to follow. I am using the cv2.inRange function to create a mask for the robot.
I added a few lights to the Gazebo environment, but I am still not able to create a mask from the image.
Any help would be appreciated.

Hi @sachinkum009,

What color space are you using? RGB? It is better to work in HSV, since color detection is simpler and more robust there. So it will be useful to convert the image to HSV and then apply the color filter with cv2.inRange. Take a look at these two examples: (a) https://cebolladaisabel.wixsite.com/myblog/contact , where you will find the right values for cv2.inRange, and (b) https://carlos2caminero.wixsite.com/mobilerobotics/post/follow-line

Cheers,

Hi, @jmplaza,

Thank you for your response.

I am using OpenCV to convert the image to HSV and then applying the inRange function to create a mask, with [170, 100, 100] as the lower range and [180, 255, 255] as the upper range.

I don’t know why the mask is still black.

It seems 170-180 is too narrow. Try these values: [0, 179] for H, [0, 255] for S, and [0, 255] for V. Isabel found them successful for detecting the red line, as you can see on her webpage https://cebolladaisabel.wixsite.com/myblog/contact


@jmplaza
Thank you for your reply,
but I tried these values:

lower_yellow = np.array([0, 0, 0])
upper_yellow = np.array([179, 255, 255])

mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

Now, the mask is white.

Please guide me.

Hi @sachinkum009, just test different values for those filter ranges and see their effect; you will eventually find good values for them. You may also read about the HSV color space on the internet and in the OpenCV documentation. Cheers,


@jmplaza

Thank you for your response

I found out that this is actually the full range that OpenCV uses for HSV.
I will check the filter for red and see which values work.
:slight_smile:

@jmplaza
Can you please check this?

I used the OpenCV cvtColor function to convert from BGR to HSV,
but the results are totally different.

The HSV image (the left one) detects the red line, but in the MainWindow, where I am using the same code, the mask that is created is completely black.

BGR is different from RGB. Read the docs and explore for yourself. You are close! :slight_smile:

@jmplaza
Thank you
I converted the image to BGR using the OpenCV cvtColor function and then converted it to HSV. After that, I applied the inRange function to create the mask, used the image moments to get the center point of the line, and calculated the error by subtracting that center point from the actual center point of the image. Finally, I applied the P controller to get the turning velocity.

Cheers,

@jmplaza
I would like to ask about the PD controller.
I successfully added a P controller, which computes u = -Kp * error.

Now, I want to implement a PD controller.
That is, u = -Kp * error - Kd * de

I don’t know how I am supposed to calculate the derivative of the error, or with respect to what I should calculate it.

Thanks in advance

@jmplaza
I found it:
de is the difference between the current error and the previous error.
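With de defined that way (one value per control cycle), a PD step can be sketched as below. The gains are illustrative assumptions, and the sign convention follows the u = -Kp * error form used earlier in the thread:

```python
KP = 0.005   # proportional gain (illustrative)
KD = 0.01    # derivative gain (illustrative)

prev_error = 0.0

def pd_control(error):
    """PD step: the derivative term is the change in error since the
    previous iteration, which damps oscillations around the line."""
    global prev_error
    de = error - prev_error      # discrete derivative of the error
    prev_error = error
    return -(KP * error + KD * de)
```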

@jmplaza
Hi,
I was able to implement the P and PD controllers,
but I am not sure how to use a PI controller.
Any help?
:slight_smile:

No integral part is required in the controller to properly solve this exercise. A PD controller, properly tuned, can successfully solve it.

OK, thank you.
Just out of curiosity, how are we supposed to use the integral part?

Hi @sachinkum009,

that would be the sum of all the (signed) errors in a buffer holding the last N iterations up to the present one. In this case, each error could be the difference in pixels between the desired center of the red line in the image and the observed one.
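A minimal sketch of that idea, assuming a buffer of the last N signed pixel errors (both N and the Ki gain below are illustrative assumptions):

```python
from collections import deque

N = 30                     # number of past iterations to remember (illustrative)
KI = 0.0005                # integral gain (illustrative)
errors = deque(maxlen=N)   # automatically discards errors older than N steps

def integral_term(error):
    """Accumulate the last N signed errors and weight the sum by Ki."""
    errors.append(error)
    return KI * sum(errors)
```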

It is already stated in the Theory section of that exercise’s web page: https://jderobot.github.io/RoboticsAcademy/exercises/AutonomousCars/follow_line/#pid-control Just read the docs! :slight_smile:

Cheers,