Monday, 15 February 2016

Sprint 1 reflection - Michal



Team:   QRM 
Author: Michal Ogrodniczak 
Date:    15/02/2016


Technical Reflection

Sprint 1 was successful for the QRM team: we managed to complete all the tasks we planned out in week 1 of the sprint. During development we made a couple of minor design changes:
  • audio no longer has its own thread - it was simply unnecessary overhead to run it as a thread, since audio calls are non-blocking and sound can be started/stopped, and its volume controlled, from the servo thread. This decision made the code simpler and eliminated context-switching overhead.
  • initially we intended to use a 0-100 range to describe the position of the detected face relative to the centre of the frame (50, 50), and to apply the same logic to the servos, with 0 as the minimum position and 100 as the maximum. We decided to change the range to 0-1 floats, as this made the maths much easier to get right. I think this decision to unify units between vision and servos contributed greatly to the success of the project, as we did not have to worry about translating the face position into servo angles like other teams did; personally I would not know where to start with that approach.
  • besides a mutex (Python threading.Lock()) we also used events (Python threading.Event()) to synchronise the vision and servo threads - this allowed us to move the servos only when vision had updated the face position in the system (see the sketch after this list).
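Roughly, the pattern looks like this - a minimal sketch rather than our exact code, where detect_face() and set_servo() are hypothetical stand-ins for the real vision and servo calls:

    import threading

    position_lock = threading.Lock()
    new_position = threading.Event()
    face_pos = (0.5, 0.5)              # normalised 0-1 floats, (0.5, 0.5) = frame centre

    def vision_loop():
        global face_pos
        while True:
            x, y = detect_face()       # hypothetical: returns floats in the 0-1 range
            with position_lock:
                face_pos = (x, y)
            new_position.set()         # wake the servo thread

    def servo_loop():
        while True:
            new_position.wait()        # sleep until vision publishes a new position
            new_position.clear()
            with position_lock:
                x, y = face_pos
            set_servo(x, y)            # hypothetical: maps 0-1 straight onto servo travel

The event means the servo thread burns no CPU polling for changes, and the lock keeps the (x, y) pair from being read half-updated.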
Issues we hit on the way to the demo:
  • Two camera modules failed on us and we had to develop a version of the system that works off a pre-recorded video rather than the camera - but this meant that when the servos moved, it had no effect on the face position in the video.
  • I stripped the plastic gears on one of the servos while testing how strongly it holds a position. Having gone through that, we now strongly suspect that these servos might not be able to work effectively in the lamp, and stronger motors might need to be considered.
  • Our system’s performance depended greatly on its image-processing throughput. We used the formula effective FPS = number of processed frames / total seconds to run, and found it highly dependent on the capture resolution - which created a trade-off: a high resolution detected faces better, while a low resolution was faster to process and made the system more responsive. More detail and possible solutions below; the measurement itself is shown in the sketch after this list.
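Measuring effective FPS is just a frame counter over wall-clock time - a minimal sketch, assuming an OpenCV capture source:

    import time
    import cv2

    cap = cv2.VideoCapture(0)          # or the pre-recorded video file
    frames = 0
    start = time.time()
    while frames < 200:                # sample over a fixed number of frames
        ok, frame = cap.read()
        if not ok:
            break
        # ... face detection would run here, since it counts towards "processed" ...
        frames += 1
    print('effective FPS: %.1f' % (frames / (time.time() - start)))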

Code used in the class demo is available at:
https://github.com/witmicko/LamPy


Demo video:


Project management reflection

Rob and Qichao say I’m the leader of our team, but I don't think of myself as one; in fact, I don’t think we need one at the moment - we come up with ideas together, and teammates can critique and contribute to the proposed solutions. In our meetings during class hours we mostly plan for the future, report any issues that arise and try to solve them; we also spend some time experimenting to gain a deeper understanding of how the servos and the vision system work. My last post is based on those activities, and it was an effort of the whole team. In a nutshell, our management works in the following way: we come up with a list of things that need to be done and ask whether anyone is interested in doing something specific. We were lucky that each of us was interested in a different aspect of the system, so we weren’t fighting over tasks.
We don't have a problem with absences - it is natural that someone occasionally cannot show up for some reason, but we always inform our teammates beforehand, and the absent person is kept up to date through our Google Drive.

Teamwork reflection

At the start we didn't know each other, but we all knew there was no time to waste, so we set out to tackle the easy points first - team photo, team name, lamp name. Everyone had a Gmail account, so I suggested Google Drive as a collaboration space for post templates, source code, a few documents for open discussion, etc. The project was entirely a group effort: in class, rather than going off to do our tasks separately, we discussed and experimented with solutions to whatever problem someone had at the time - good ideas were either implemented or noted down for future reference (e.g. the servo HAT).
Most importantly, we were able to learn from each other. Rob discovered that we can code remotely on the RPi through PyCharm with a remote interpreter - a major boost to productivity and the debugging process, even though it doesn't work for him due to the network setup he has at home. I showed Rob and Qichao how to use PuTTY/KiTTY for SSH access and FileZilla for file transfer.

Personal reflection

This module was an elective for computing people, and presumably we could have chosen a module that would be easier - but I don't think any of the other electives would teach us as much while letting us have fun building things. My initial response to the sprint 1 goals was ‘wat’; I think we were all somewhat overwhelmed by what we needed to build, but by biting off a piece at a time we were able to dissect the system into logical portions, each of which was achievable.

At the moment I think there is a lot of reinventing the wheel going on across the teams: every team had the jittery-servos problem, but we did not try to solve it at class level. My proposal for this is:
  • unified system metrics - e.g. FPS - so we could directly compare performance between the teams and select the best system for the final sprint.
  • directing some time towards whole-class discussion:
    • if team A has an issue, one person might get up and talk about it with the rest of the class.
    • if someone has discovered something, they might get up and share it with the rest of the class.


Next sprint

I think that for best collaboration the teams should remain as they are, but inter-team cooperation should be introduced. I also think that if the final goal is the lamp, we should get the lamps for the next sprint: there will be issues we’ll come across, and it will take time to solve them. Perfecting the current systems would be nice, but they are too different from the final build:
  • the hinges/servos will not be positioned as they are now.
  • we are not sure how the servos behave when working under a weight load.
  • the camera will be offset in relation to the servos, meaning it will be harder to correlate the face position with servo movement.

List of things to investigate to improve the existing systems (rough sketches for each follow the list):

  • Processes - most teams were only able to push the RPi to 50-60% CPU usage; having separate processes might help utilise it better.
  • OpenCV bottleneck - investigate whether it is face detection and/or camera IO that is causing most of the delay.
  • RPi servo HAT / Arduino - whichever we can get working more easily, but hardware (DMA-driven) PWM is needed to drive the servos in a smooth and controlled fashion.
  • RPi networking - we could leverage two RPis and two cameras; possible applications are:
    • use one camera in a similar fashion to last year’s project, and a 2nd camera on the lamp head to follow the detected face with greater accuracy.
    • doubling the processing power.
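For the processes idea, the usual pattern would be to split capture and detection across cores with multiprocessing - a minimal sketch assuming the stock OpenCV Haar cascade; the cascade file name and queue size are illustrative:

    import multiprocessing as mp
    import cv2

    def capture(frame_q):
        # grab frames on one core and hand them off
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if not frame_q.full():     # drop frames if detection falls behind
                frame_q.put(frame)

    def detect(frame_q):
        # run the CPU-heavy cascade on another core
        cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
        while True:
            gray = cv2.cvtColor(frame_q.get(), cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, 1.3, 5)
            # ... hand normalised face positions to the servo side ...

    if __name__ == '__main__':
        q = mp.Queue(maxsize=2)        # small queue keeps frames fresh
        p = mp.Process(target=capture, args=(q,))
        p.daemon = True
        p.start()
        detect(q)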
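For the bottleneck question, timing the two stages separately over a batch of frames should settle it - again a sketch, assuming the stock Haar cascade:

    import time
    import cv2

    cap = cv2.VideoCapture(0)
    cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    io_time = detect_time = 0.0
    N = 100

    for _ in range(N):
        t0 = time.time()
        ok, frame = cap.read()                  # stage 1: camera IO
        t1 = time.time()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cascade.detectMultiScale(gray, 1.3, 5)  # stage 2: face detection
        io_time += t1 - t0
        detect_time += time.time() - t1

    print('camera IO: %.1f ms/frame, detection: %.1f ms/frame'
          % (io_time * 1000 / N, detect_time * 1000 / N))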
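On the PWM point: even before a HAT or an Arduino arrives, the pigpio library can generate DMA-timed servo pulses on an ordinary GPIO pin, which should already remove software-timing jitter - a sketch, with the pin number and the 500-2500 us pulse range as assumptions for a typical hobby servo:

    import pigpio

    PAN_GPIO = 18                      # hypothetical pin assignment
    pi = pigpio.pi()                   # connects to the pigpiod daemon

    def set_pan(pos):
        # pos is our usual 0-1 float, mapped onto the servo's pulse range
        pi.set_servo_pulsewidth(PAN_GPIO, 500 + pos * 2000)

    set_pan(0.5)                       # centre the servo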
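And for the two-RPi idea, the simplest wiring would be the vision Pi broadcasting the normalised face position to the servo Pi over UDP - a sketch with a made-up address and port:

    import json
    import socket

    # vision Pi: publish each new face position
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def publish(x, y):
        sock.sendto(json.dumps([x, y]).encode(), ('192.168.0.20', 5005))

    # servo Pi: receive and act on positions
    # rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # rx.bind(('', 5005))
    # x, y = json.loads(rx.recv(64).decode())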
