Friday, 18 March 2016

Lampbotics Project: Proof of Concept, Colour Detection with Audio Output, Including Threads

Team: ESD
Authors: David O'Mahony, Luke Byrne, Kamil Mech
Date: 18/03/2016


This test blog demonstrates the combined efforts of three team members: Luke, David, and Kamil. The video shows the current working state of the system. The setup works as follows: the camera sees the colour held in front of it, the system then changes the colour of the light to match, and the lamp says the colour it sees, or the closest representation of that colour it knows. All intercommunication is controlled through the master thread.
  • Luke: colour detection and light control
  • David: text-to-speech control
  • Kamil: master thread control
Test video:

Aspects on show

  • Colour detection and lighting
This program uses the OpenCV software in a different way to the standard face-detection method. As stated in a previous blog entry, there are two threads involved in displaying these colours. The first is the colorDTC thread, which detects the colour and outputs the value. Since the last entry this has been updated in a minor way: as well as sending RGB values, it now sends hex codes too. This is done to improve the colour that is sent. David found code to use with his text-to-speech that allows the lamp to read out the detected colour, or at the very least the closest match. Using this code, a colour name is derived and then converted to hex to pass to the NeoPixles, as they can output the hex value exactly. This method is preferred because it produces the closest matching output, which seems more accurate than sending the raw values. The NeoPixle thread has not seen any changes. All of this code can be found here.
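The RGB-to-hex step described above can be sketched as follows. This is a minimal illustration, not the project's actual code; the function names are made up for the example.

```python
# Minimal sketch: pack a detected (R, G, B) triple into the hex form
# the NeoPixle thread can output exactly, and unpack it again.
# Function names are illustrative, not the project's real API.

def rgb_to_hex(r, g, b):
    """Pack an (R, G, B) triple into a '#RRGGBB' hex string."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(hex_code):
    """Unpack a '#RRGGBB' hex string back into an (R, G, B) tuple."""
    hex_code = hex_code.lstrip("#")
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(218, 165, 32))   # goldenrod -> #DAA520
print(hex_to_rgb("#DAA520"))      # -> (218, 165, 32)
```

Sending a single canonical hex code like this means the lamp and the speech module are guaranteed to be describing the same colour.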

One thing I have learned from this sprint is that the surrounding environment changes the detected values quite dramatically; ambient lighting has an effect that needs to be overcome. For the future, I believe there are two options. Option 1: at the moment the colours are detected automatically. It is possible to narrow what is detected down to a handful of colours and manually alter the ranges that detect each one. By this I mean that should someone stand in front of the camera with a light green that appears more yellow, a golden yellow will be displayed. Option 2: the code stays as it is, but we find a way to stop ambient lighting from altering the results.
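Option 1 might look something like the sketch below: a handful of named colours, each with a hand-tuned HSV range. The ranges shown are illustrative placeholders and would need calibrating against the real lighting.

```python
# Sketch of option 1: restrict detection to a handful of named colours
# using manually tuned HSV ranges (OpenCV-style H in 0-179).
# All range values here are placeholders, not calibrated figures.

NAMED_RANGES = {
    "red":    ((0, 120, 70),   (10, 255, 255)),
    "green":  ((40, 70, 70),   (80, 255, 255)),
    "blue":   ((100, 150, 50), (130, 255, 255)),
    "yellow": ((20, 100, 100), (35, 255, 255)),
}

def classify_hsv(h, s, v):
    """Return the first named range containing the HSV sample, else 'unknown'."""
    for name, (low, high) in NAMED_RANGES.items():
        if all(lo <= c <= hi for c, lo, hi in zip((h, s, v), low, high)):
            return name
    return "unknown"

print(classify_hsv(60, 200, 200))  # -> green
```

Narrowing detection like this trades variety for stability: a yellowish light green simply falls into whichever range it is closest to.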
I would like to thank David for his help with building the lampbot's head, as well as Kamil for his help with understanding the more in-depth parts of the threads.


  • Text-to-speech control
The text-to-speech on show takes the RGB values sent from the colour detector and converts them to the closest textual representation of the colour; for example, (0, 0, 0) becomes "black" and (200, 200, 1) becomes "goldenrod". One issue found when combining with the threads was that the general code used to speak relied on sys.argv as part of a debug step. This had negative effects, as the master thread also does debugging, and the two conflicted. Once it was removed, the code worked as intended and was improved further to allow for clearer speech. The code also has the ability to say shapes and general conversational text; due to time constraints these were not shown, but they will be developed further in the next sprint. There is some latency between what the robot says and what it sees. This will also be fixed in the next sprint by having the lamp speak only when told to, so it is not held up. I would like to thank my fellow teammates Luke and Kamil for their invaluable help in getting the setup to work with the threads and colour detection.
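The nearest-name matching described above can be sketched as a smallest-distance lookup over a palette of named colours. The palette below is a tiny illustrative subset, not the one the project actually uses.

```python
import math

# Minimal sketch: match an RGB reading to the closest named colour
# before speaking it. The palette is a small illustrative subset.

PALETTE = {
    "black":     (0, 0, 0),
    "white":     (255, 255, 255),
    "red":       (255, 0, 0),
    "green":     (0, 128, 0),
    "blue":      (0, 0, 255),
    "goldenrod": (218, 165, 32),
}

def closest_colour_name(rgb):
    """Return the palette name with the smallest Euclidean distance to rgb."""
    return min(PALETTE, key=lambda name: math.dist(PALETTE[name], rgb))

print(closest_colour_name((0, 0, 0)))      # -> black
print(closest_colour_name((200, 200, 1)))  # -> goldenrod
```

With a fuller palette (e.g. the standard web colour names), the lamp always has something sensible to say, even for in-between readings.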

  • Master thread control
As planned, TVCS v0.1.5 was used to combine the parts. The NeoPixle, ColorDTC, SoundEffects and TTOS modules were successfully assimilated. SoundEffects was based on the previous sprint's solution, so it had to use the built-in legacy support (fromClass). This saved us time, as the code didn't need to be reworked for the new model. Wiring up was not difficult, as all the modules involved came prepared with logic and expectations regarding message input and output; a few fixes were required on both sides. Combining took a bit longer than expected, but I think it paid off. All four modules communicate successfully, and as a result the system manages to detect colours, make sounds and speak.
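The master-thread wiring can be sketched with standard Python queues and threads. This is a hedged illustration of the idea, not the TVCS API: the module and message names below are made up for the example.

```python
import queue
import threading

# Hedged sketch of the master-thread idea: each module runs in its own
# thread with an inbox queue, and the master routes messages between
# them. Names here are illustrative stand-ins, not the TVCS API.

def colour_detector(out_q):
    """Stand-in for the ColorDTC module: emit one detected colour."""
    out_q.put(("colour", "#DAA520"))

def master(detector_q, pixel_q, speech_q):
    """Route one detector message to both the light and speech modules."""
    kind, payload = detector_q.get()
    if kind == "colour":
        pixel_q.put(payload)   # NeoPixle-style thread gets the hex code
        speech_q.put(payload)  # TTOS-style thread gets the same value

detector_q, pixel_q, speech_q = queue.Queue(), queue.Queue(), queue.Queue()
worker = threading.Thread(target=colour_detector, args=(detector_q,))
worker.start()
master(detector_q, pixel_q, speech_q)
worker.join()

light_msg = pixel_q.get()
speech_msg = speech_q.get()
print(light_msg, speech_msg)  # both receive #DAA520
```

Routing everything through the master keeps the modules decoupled: the colour detector never needs to know who consumes its output.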


Build pictures:

David O'Mahony, entry 7
Luke Byrne, entry 6
Kamil Mech, entry 8
