Monday, 2 May 2016

Sprint 3 - Final Addition and Reflection. Luke Byrne


So here it is. The final sprint and the final reflection. The purpose of this reflection is to describe what was done, how it all came together, and what could be done in the future of the project. To start off, let us look at what it was the team set out to achieve at the start of the semester.

  1. Develop a project that would, in some way, incorporate the basics of applied embedded operating systems, ensuring that some form of multithreading or multiprocessing is present, as well as inter-process communication. 
  2. In choosing this project (Lampbotics), design it in a way that allows all 11 engineers and 3 computer scientists to have some part to work on.
  3. Develop Teamwork skills.

With these three simple aims the project could be started. The first sprint saw the group split into four teams that set to work on developing the basics of how the project's mechanics and electronics would work. My group was Team Bohemian Raspbian, introduced in my first team post below. For my team I worked on OpenCV, getting it to track a face in the X and Y domain. Our teamwork was successful and we were able to create a final product that could move based on where the face was, with sounds applied to mask the motor movements.

For the remainder of the project the teams were reorganised. Two teams were formed: Embedded System Design and Mechanical. I was part of the ESD side and was tasked with the implementation of colour detection and display. Sprint 2 covered this implementation in its first form and finished off with the combination of a number of different systems through Kamil's threads, linking my code to David's code. This was outlined in Blog Post 6, linked below.

Tech Reflection

So, sprint three. For this sprint I still had the colour detection to implement, as well as the colour display. There was a bit of a reshuffle done in order to allow a fast demonstration of my code in the final demo (seen in Team Post 3). As my code was mainly demonstrable on its own, we had to come up with a solution for showing it when more than one thread was looking to use the camera. As there was already a demo with my colour detect thread, and as the face detect thread allowed for the incorporation of many more threads and the use of the servos, it was decided that my colour detect would not be in the final demo. While I was not in, the lads thought of different ways they could incorporate mine in case Heng's GUI was not ready on the day. David came up with the idea of using the x, y and z data to change the colour of the NeoPixels, and I got to work.

This is the link to all the final integration code that was used. To start, the Colour Detect module was altered to remove the HEX code translator seen in sprint two. This removed repeated code, as I was going to need the translator in the Serial thread. In turn, the object tag that the code sends was altered to exclude the HEX values and include two flags. The first was "NewColor", which David required to ensure his code worked correctly. The second was the "Working" tag, which I required to ensure that the serial thread for the Arduino did not send code from two threads at once.

The second alteration is seen in the NeoPixel Serial module. This had a lot of additions to incorporate more functionality into the design. Rather than have the colour be just the single colour that the Colour Detect module last displayed, or none at all, David and Philip thought the use of the x, y and z parameters from the Face Detect module could bring some life to the Lamp as it moved. To achieve this, part of the Colour Detect module was incorporated here. The first step was to see which thread was running: either the Colour or Face Detection module can have control of the sending of values, but only one is allowed at any one time. With this lock in place to ensure the system ran smoothly, the second step was to take the values sent by Philip's Face Detect thread and convert them to the range 0-255. The final step was to convert any RGB values sent to the output function, the same way as in the old iteration of the Colour Detect code: the RGB values were passed to the Webcolors converter, which returns the closest colour name based on the RGB values and in turn the HEX code, which is sent to the Arduino to display on the LEDs.
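The two conversions above can be sketched roughly as follows. This is only an illustration, not the real module: the scaling ranges and the small hardcoded palette (standing in for the full Webcolors CSS3 name list) are my assumptions, as are the function names.

```python
# Sketch: scale face-position readings to 0-255, then map the resulting
# (R, G, B) to the nearest named colour and its HEX string, as the
# NeoPixel Serial thread does before sending to the Arduino.
# A small hardcoded palette stands in for the full webcolors name list.

PALETTE = {
    "red":    (255, 0, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
    "white":  (255, 255, 255),
    "black":  (0, 0, 0),
}

def scale_to_byte(value, lo, hi):
    """Clamp a reading into [lo, hi] and rescale it to 0-255."""
    value = max(lo, min(hi, value))
    return int(round((value - lo) * 255 / (hi - lo)))

def closest_colour(r, g, b):
    """Return (name, hex) of the palette entry nearest in RGB space."""
    name = min(
        PALETTE,
        key=lambda n: sum((c - t) ** 2 for c, t in zip(PALETTE[n], (r, g, b))),
    )
    return name, "#%02X%02X%02X" % PALETTE[name]

# Example: an (x, y, z) face reading mapped into colour space.
# Frame size and z range are assumed values.
r = scale_to_byte(600, 0, 640)   # x position in a 640-pixel-wide frame
g = scale_to_byte(20, 0, 480)    # y position
b = scale_to_byte(90, 0, 100)    # z (face size) as a percentage
print(closest_colour(r, g, b))   # nearest palette entry, e.g. ('red', '#FF0000')
```

The nearest-name search is the usual workaround for the fact that an arbitrary RGB triple rarely matches a named colour exactly.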

Project Management

As with the last sprint, both David and I took on the leadership role of the group. As this was the last sprint there was a lot of work to do on this front, and trying to organise everything was no easy task. We started by ensuring that everyone was on track to completion. To this end I helped both Heng and Kamil with the incorporation of the Kivy GUI into the project, and Ian with the Inter-Pi communication, which was vital to combining everything into one project. Towards the end of the sprint I was unable to lead the group forward due to preparation for interviews, and I would like to congratulate David on his excellent leadership within the group, as it was left to him to do the work of two.

Demo day was where the teamwork really paid off. This was the day that all the work came together. Unfortunately the end solution could not include every aspect that was envisioned for it, as not all parts could be brought into the thread design safely without crashing the remaining threads. The following is a list of all the threads that made it into the demo:

  1. Threads: This was the backbone of the project and ensured that each program ran and received its required information.
  2. Inter-Pi Communications: This was the link between the two boards and ensured that the servos and TTOS threads had what they needed to run effectively.
  3. NeoPixel Serial control: Used to send HEX values to the Arduino for the NeoPixel LEDs to display the colour.
  4. TTOS: Speech software used for the Lampbot to express itself through sound when it is moving or when it has detected a face or colour.
  5. Servos: Control of the servos' movement based on the face position.
  6. Face Detection: Detects a face and outputs its x, y and z co-ordinates. 

Threads and Inter-Pi Comms were the first to be brought together for the demo. This made sure that there were, at the very least, communications between the threads on the ESD board and the Mechanical board. Next was the inclusion of the Face Detect and Servos threads, which brought the project to life, showing the movements of the Lamp while the face moved. Unfortunately the mid-joint servo could not be brought in for the final demo, as there was an issue incorporating it with everything else, but an individual bench-test demo could be established. Finally, the incorporation of the NeoPixel and TTOS threads added the last bit of life to the demo. Team Post 3, linked below, shows the final demo and the setup held within the case of the project.

The combination of each of the systems took a level of dedication by all those involved, and the vast majority of the day to complete, going late into the night. What started off as a bunch of people from three different courses ended with a team that was able to keep it together through the good and bad times of the project.

Applied Embedded Operating Systems

The main objective of this course is to incorporate some sort of RTOS into an embedded system while improving skills such as teamwork. The sections above described what sort of teamwork went on and how my own individual project went; however, there is not much yet on the RTOS. In the first sprint my team incorporated an RTOS through the use of a threaded system designed by Kamil. His system is vast and complicated but still incorporates the use of threading and IPC. Hopefully this section will serve as a description of how the system works and how each thread can communicate with the others.


Firstly, there are two parts to the threads of the overall project; for the purpose of this description only, we will call them the foreground and background. The foreground is the individual threads that are available, while the background is Kamil's main program, ensuring the threads are all initialised and running smoothly. To start, a thread must be set up and initialised in the foreground. This requires importing the library that Kamil developed, called "DynamicObjectV2", and is set up using the following commands:

  • sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)))
  • Obj = DynamicObjectV2.Class

This is a requirement for the thread to work, as it deals with the sending of objects through specific path names so that the main thread can understand what is going on. Next is the initialisation of the code. This is done in the init(self) function held within a foreground thread, through the following code:

self.registerOutput("colourDTC", Obj("R",0,"G",0,"B",0,"NewColor",True,"Working", False))

Kamil's registerOutput function, held within the background program, registers both the tag and the output of the thread you are registering. In this case the tag I am registering is "colourDTC" and the values I am initialising it with are:
  • R: 0 – Initialises the Red value to zero.
  • G: 0 – Initialises the Green value to zero.
  • B: 0 – Initialises the Blue value to zero.
  • NewColor: True – Initialises the NewColor flag to True.
  • Working: False – Initialises the Working flag to False.
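The Obj call packs alternating name/value pairs into a single object. The real DynamicObjectV2 implementation is Kamil's; the stand-in below is only a guess at its general shape, to illustrate that calling convention:

```python
class Obj:
    """Stand-in sketch of DynamicObjectV2.Class: packs alternating
    name/value arguments into attributes, so Obj("R", 0).R == 0."""
    def __init__(self, *pairs):
        if len(pairs) % 2:
            raise ValueError("Obj expects name/value pairs")
        for name, value in zip(pairs[::2], pairs[1::2]):
            setattr(self, name, value)

# The colourDTC registration from the post, using the stand-in:
colour_dtc = Obj("R", 0, "G", 0, "B", 0, "NewColor", True, "Working", False)
print(colour_dtc.R, colour_dtc.NewColor, colour_dtc.Working)  # 0 True False
```

Packing the output into one object like this means the background only ever has to store and copy a single value per tag.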

The first step in the background is the inclusion of the thread in his module list. This is added as just the name, without the extension, in the "fromSource" section of the software in the modules folder. To explain the full extent of Kamil's code would be lengthy and could never do it justice, but for a quick understanding of what is going on, a bullet-point list can be used:

  • Threads are imported from the module list "fromSource" section.
  • Each thread is added through the function addThreadFromSource(moduleSource, moduleName).
  • The API is run, which calls the following:
    • registerOutput: With a lock to ensure there are no repeated calls, the tag is registered.
    • getInputs: With a lock to ensure there are no repeated calls, the inputs of the tag are obtained.
    • output: With a lock to ensure there are no repeated calls, outputs whatever values are applied to the tag (such as R: 0).
    • message: Outputs any message sent through the self.message function, explained in IPC below. Used instead of the print function to ensure system outputs are not affected.
    • Obj(copy.copy(flags)): Outputs the flags of the system.
  • Each thread is started using the for loop at the end of his code.
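The steps above can be sketched as a small manager class. The names registerOutput, getInputs, output and addThreadFromSource come from the post; everything else here (the class itself, the dictionary of outputs, start_all) is illustrative and assumes a plain Python threading design:

```python
# Rough sketch of the background manager's shape: a locked dictionary of
# per-tag outputs, plus a list of threads started in one for loop.
import threading

class Manager:
    def __init__(self):
        self.lock = threading.Lock()
        self.outputs = {}      # tag -> latest output object
        self.threads = []

    def registerOutput(self, tag, obj):
        # Lock so two threads cannot register or write a tag at once.
        with self.lock:
            self.outputs[tag] = obj

    def output(self, tag, obj):
        with self.lock:
            self.outputs[tag] = obj

    def getInputs(self):
        # Hand back a copy, so readers never race the writers.
        with self.lock:
            return dict(self.outputs)

    def addThreadFromSource(self, target, name):
        self.threads.append(threading.Thread(target=target, name=name))

    def start_all(self):
        # "Each thread is started using the for loop at the end of his code."
        for t in self.threads:
            t.start()
```

The lock-around-every-access pattern is what the bullet list means by "with a Lock to ensure there are no repeated calls": only one thread can touch the shared output table at a time.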

The system incorporated ensures that all threads are initialised and can be shut down from the main program. It makes sure that each thread can publish whatever its outputs are by updating them, and it allows each thread to obtain a copy of all the registered threads' outputs, which brings us nicely to IPC.


Inter-Process Communications is key for a threaded system to run. The background code, described loosely above, imports all of the outputs that each thread supplies after initialisation. These are held and updated when each foreground thread declares a new output using the self.output("TAG", Obj("NAME", VALUE)) function. At this point only the background code knows of the change; it does not push the change out to the other threads. It is up to each foreground thread to request the values it needs. In the NeoPixel Serial code, held in my GitHub link above, there is an example of this call:

  • ColrDTC = self.getInputs().colourDTC

This function call sees the foreground thread of the NeoPixel Serial code requesting the inputs of the colourDTC tag. This stores all the values declared within the colour detect thread in a newly appointed object known as "ColrDTC", which in turn allows each of the values in that object to be recalled individually (e.g. ColrDTC.Working). This allows one thread to use what it requires from a different thread without a direct link to that individual thread, and thus completes the IPC requirements of the RTOS.
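The pull-based pattern might look something like the sketch below. The getInputs call and the colourDTC/Working names come from the post; the SimpleNamespace snapshot and the poll method are my own illustrative stand-ins, not the real module:

```python
# Sketch: the NeoPixel Serial thread pulls the colour detect thread's
# latest output from the background instead of talking to it directly.
import types

def make_inputs():
    """Illustrative snapshot the background might hand back: each
    registered tag exposed as an attribute of the returned object."""
    colour = types.SimpleNamespace(R=255, G=0, B=0, NewColor=True, Working=True)
    return types.SimpleNamespace(colourDTC=colour)

class NeoPixelSerial:
    def getInputs(self):
        return make_inputs()

    def poll(self):
        # Pull the colour detect thread's values; no direct link needed.
        ColrDTC = self.getInputs().colourDTC
        if ColrDTC.Working:
            # Colour detect holds the serial "lock": only it drives the LEDs.
            return ("colour", (ColrDTC.R, ColrDTC.G, ColrDTC.B))
        # Otherwise the face detect x/y/z path would drive the colour instead.
        return ("face", None)

print(NeoPixelSerial().poll())  # -> ('colour', (255, 0, 0))
```

Because the consumer only ever sees a copied snapshot, a slow serial write can never block the colour detect thread mid-frame.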


Unfortunately not all of the parts of the project could be implemented. This does not mean, however, that the project was incomplete. A demo was created and can be seen in Team Post 3 in the links below. It showed the potential of this project and stands as a platform for those who take the project on in the future.

My input to the demo did not show the full extent of my contribution, as only one of my two modules could be linked with the rest, but previous posts show the full extent of what I achieved. The most advantageous aspect of this project is not the finished product, however. Although it is nice to see what you set out to do working, the most valuable experience was the teamwork involved. The RTOS is invaluable to the project, and the learning behind the threading system implemented is something that will benefit us all in the future, but this cannot compare to the lessons learned in working with, and managing, 13 engineering and computer science students, and the problems that can occur when attempting a project of this size. So what have I learned?

  1. An RTOS is both vast and complicated, but with dedication anything can be understood.
  2. Computer Science students write far neater and more organised code than myself and most of the Electronic students that I know, but in saying that, I am a far better coder from their example.
  3. Rather than saying "Yeah, but..." say "Yes, and...", as it is a far nicer way to get your point across.
  4. You can't know everything, and that's OK. The main thing is to know when you need to ask for help, and to notice when someone needs help but doesn't know when to ask.
  5. Anything can be built with time and patience.

For the future of this project I would like to see it developed to the original specs: the user interface fully integrated, with four options for the camera that would see the face detection/recognition, colour detection and shape detection programs run individually, as well as the incorporation of the shape and colour detection game, which could be used on open days or at demonstrations to inspire STEAM in secondary and primary schools.

Finally, I would like to thank Michal, Kamil, Heng, Ian, Lee, Louis, David, Philip, Philipe, Rob, Joe, Shaun and William for all of their contributions, with a special thanks to Kamil for teaching me the proper way to code, David for his help and leadership, and Jason for running this course and allowing us free rein in the design and implementation of the project.

Luke Blog Post 8.

Blog Posts
