The project itself was completed in time for the demonstration. Each of the parts worked individually under test conditions. A debug condition was written into the threads so that the various threads could be observed in operation. This gave a rough indication of what was going on and showed that the separate pieces of code were working together. However, on demo day, when everything was connected, the servo motors jittered and moved erratically. The cause is not yet known, but it is definitely something that would need to be solved before the final iteration of the project.
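A debug switch like the one described can be as simple as a flag-guarded print tagged with the current thread's name. The sketch below is a minimal illustration of the idea, not the project's actual code; the worker task and the `debug` helper are hypothetical.

```python
import threading
import time

DEBUG = True  # set to False to silence the trace for the demo


def debug(msg):
    # Hypothetical helper: tag each message with the thread that produced it
    if DEBUG:
        print(f"[{threading.current_thread().name}] {msg}")


results = []
lock = threading.Lock()


def worker(task_id):
    # Stand-in for a real task (vision, servos, sound, ...)
    debug(f"starting task {task_id}")
    time.sleep(0.01)
    with lock:
        results.append(task_id)
    debug(f"finished task {task_id}")


threads = [
    threading.Thread(target=worker, args=(i,), name=f"worker-{i}")
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Running this interleaves `[worker-0] …`, `[worker-1] …` lines on the console, giving exactly the kind of rough trace of which thread is doing what that proved the code was cooperating under test.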
From looking over the previous design of the Lampbotics project, I think a number of improvements could be implemented. These are as follows:
- One of the main issues with the previous iteration was that it required a laptop to be connected to the webcam. This issue is essentially solved by implementing the design on the Raspberry Pi 2 board. This opens up a world of different possibilities, but one major issue follows. Despite the RPi2 having a 900MHz quad-core processor, the processor is nearly overtaxed, resulting in a slow response from the system. If the project is to have a faster response, a solution to this issue must be implemented. One such solution could be a network of two RPi2 boards: one that handles the camera operations and one that manages all the other functions. This would drastically reduce the CPU usage on each board and allow for faster response times from the servo motors.
- Another improvement to the design would be its mobility. A lamp should not be stationary; it should be possible to place it on different tables. I think the mobility issue can be solved with the RPi2, but one main issue still remains. The use of a servo under the table to control the up-and-down motion of the lamp would need to be changed. There are a number of ways this can be achieved. If the base is very large, then the motor can be stored in there. However, I think a more appropriate approach is to compromise the look of the lamp. The robotic lamps I have seen online all have a servo motor on the mid joint. If we are willing to alter the look, I believe this could be a solution. Another solution could be the use of a linear actuator on the joint that can push the arm down.
- In order to control the servos better and without jitter, I believe a servo driver such as the Adafruit 16-channel servo HAT could be used. This board connects to the RPi2 directly and takes in commands; the board itself manages the servos rather than the Pi doing all of the work of generating the signals. The board is designed for exactly that job, and using it could improve the controls.
- Finally, as I stated in my first blog, the lamp needs to have the basic functionality of a lamp. NeoPixels can be implemented in a number of different ways to achieve both the functionality of a light source and colour changing for mood descriptors, or even a basic pixelated face.
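The jitter argument for the servo HAT comes down to where the PWM edges are generated. The HAT's PCA9685 chip divides each PWM period into 4096 ticks and produces the pulses in hardware, so the Pi only has to send tick counts over I2C. The helper below is a hypothetical sketch of that conversion, not code from any particular driver library:

```python
def pulse_to_ticks(pulse_us, freq_hz=50, resolution=4096):
    """Convert a servo pulse width in microseconds to PCA9685 ticks.

    The PCA9685 on the servo HAT splits each PWM period into 4096
    ticks; the host only writes on/off tick counts and the chip
    generates the signal continuously in hardware.
    """
    period_us = 1_000_000 / freq_hz  # 20,000 us per period at 50 Hz
    return round(pulse_us / period_us * resolution)
```

With the typical 1.0 ms to 2.0 ms servo range at 50 Hz this works out to roughly ticks 205 to 410, the kind of values a driver would write to the chip's registers. Because the chip keeps generating the signal between updates, timing hiccups on the Pi no longer show up as servo jitter.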
These implementations can all be decided upon by the individual teams, but one thing is certain: lamps need to be bought so that the designs can be implemented sooner rather than later.
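If the two-board split suggested above were pursued, the vision board would only need to ship a few bytes per frame to the control board. As a sketch of how small that protocol could be, assume a hypothetical wire format that packs the detected face centre into four bytes:

```python
import struct

# Hypothetical wire format for the two-Pi split: the vision board packs
# the face centre as two little-endian unsigned 16-bit integers and
# sends it over a socket; the control board unpacks it and drives the
# servos. Four bytes per frame is negligible network load.
FACE_MSG = "<HH"


def pack_face(x, y):
    """Serialise a face-centre pixel coordinate for transmission."""
    return struct.pack(FACE_MSG, x, y)


def unpack_face(payload):
    """Recover the (x, y) face centre on the receiving board."""
    return struct.unpack(FACE_MSG, payload)
```

The vision board would send `pack_face(x, y)` over a TCP or UDP socket each frame, and the control board would `unpack_face` it and update the servo targets; the heavy OpenCV work never touches the control board's CPU.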
After week one the team was divided into the different tasks. I had the vision system, implementing the OpenCV face detection; Kamil had threads; Shaun had servos; and Heng had the sound implementation. Although we got on as a group, there was not as much communication as was needed. The team members largely kept to doing their own thing, though help was given to those who needed it. I definitely think the team could benefit from more communication, but with that said, the first sprint was completed as a proof of concept.
A suggestion was made that would see the teams broken up. I myself think this would be a problem. The teams that were formed have managed to get to a stage where they are communicating, even if that was a slow process, and it would be a setback to break this up. However, I do think that inter-team relations can also be improved. At the end of the year the objective is to have a working Lampbotics project. I think that to achieve this, a full collaboration could be had. Teams are divided internally to split up the different tasks; for instance, I was given OpenCV. I believe it would be beneficial for everyone working on OpenCV to sit down together and brainstorm this aspect of the code. This would improve communication as well as build a knowledge pool that everyone can use.
Looking over the code, I can say with certainty that in order to progress with the vision system, a proper understanding of the Haar cascade implementation files and software is needed. At the moment the code can only detect a face that looks straight at the camera. A side-profile cascade, if one exists, could be used to keep track of a face at greater distances and at various angles. For my part in the team, I learned a lot. Communication is something that I believed was needed, but I was unaware of how valuable it is in a team environment. I believe the experiences gained from this sprint alone can propel the teams forward.
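On the side-profile question: OpenCV does ship a profile cascade (`haarcascade_profileface.xml`) alongside the frontal one, so the extension could be as simple as running the frontal detector first and falling back to the profile detector. The selection logic below is a hypothetical sketch; `frontal_faces` and `profile_faces` stand in for the lists of `(x, y, w, h)` rectangles that OpenCV's `detectMultiScale` returns:

```python
def pick_detection(frontal_faces, profile_faces):
    """Prefer a frontal face hit; fall back to the profile cascade.

    Each argument is a list of (x, y, w, h) rectangles, as produced by
    running cv2.CascadeClassifier.detectMultiScale with the frontal
    and profile cascade files respectively. When a cascade fires more
    than once, the largest rectangle is taken as the nearest face.
    """
    for faces in (frontal_faces, profile_faces):
        if len(faces) > 0:
            return max(faces, key=lambda r: r[2] * r[3])
    return None  # no face in this frame; hold the last servo position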
This marks the first time that I have worked as part of a team that is a mix of different approaches. Although through the years of college I have been part of one team or another they have all been members of my own course and think similarly to me. It is definitely a valuable experience to view different approaches to an issue. From this sprint I have seen my own limitations and how to I can proceed past them and improve my abilities.
Luke, 3rd blog entry
Luke, 3rd blog entry