Since the last sprint, there has been more cross-team collaboration than during the first sprint. Teams that completed tasks ahead of the others shared their knowledge, and ideas were exchanged between teams where needed. Even limiting this to examples I was personally involved in: I assisted Sean with his Python code, while our team made use of Cathal's lamp prototype and integrated it with our pan & tilt control. Similarly, as our team had greater success in getting serial communication working between the mbed and the PC, we shared our code for reading data from the mbed over the serial connection.
Within our team, we initially had much greater success dividing tasks out so that every team member had something to do. For example, during week 1 of this sprint we had separate team members working on IPC, Roborealm, serial communication and assembling the servos for our prototype.
However, we still had some difficulty finding work for all team members. Once the early tasks ceased to be separate and the remaining work converged on making everything function together, the difficulty in involving all team members returned. A few causes of this could be identified:
- There were not enough resources to duplicate our setup so that each team member could work on their own tasks. One example of where this caused problems was when Sam was working on sending an output to the third servo while I was working on IPC. We both needed an mbed to perform our tasks, but there was only one available to the group that day. As such, I could not do more than the initial implementation of IPC, as I had no test platform on which to actually run my code. Had Sam given me the mbed, he would have been unable to proceed any further with his task either.
- There was a difference in skills between team members. The team was composed of 50% electronics students and 50% Applied Computing students, but it became clear during the second week that the tasks for this sprint were much more heavily weighted towards the programming side of the spectrum. As Sam and I were more experienced in programming and completed tasks quickly, there was at times not enough work left for Dii and Donal.
On a technical level, a few issues became clear during this sprint. A major, and still unresolved, issue is the unreliability of the sonar sensors. At this stage very few sonar sensors remain, as they seem to fail at random. For example, near the end of the sprint we finished our prototype and put it away in working order, yet at the demo the next day the sonar sensor had failed and was returning random values, forcing us to fall back to a video, recorded the next day, to demonstrate our work integrating it with the rest of the system.
Even aside from the failure rate, the sonar sensors also produce very noisy distance readings for a detected object. The value reported for a stationary object can vary by as much as 10% of the sonar's range within a short time period, meaning that any device reading values from the sonar must either average over or discard many readings, or it will produce very twitchy positioning.
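The averaging approach could be as simple as a moving-average filter over the last few readings. A minimal sketch follows; the window size and sample values are illustrative placeholders, not measurements from our sonar:

```python
from collections import deque

class MovingAverage:
    """Smooth noisy sonar readings by averaging the last `window` values."""

    def __init__(self, window=5):
        # deque with maxlen automatically drops the oldest reading
        self.readings = deque(maxlen=window)

    def update(self, value):
        """Add a new raw reading and return the current smoothed value."""
        self.readings.append(value)
        return sum(self.readings) / len(self.readings)

# Example: noisy readings from a stationary target around 100 cm
filt = MovingAverage(window=5)
smoothed = [filt.update(v) for v in [100, 110, 92, 105, 98]]
# The smoothed output varies far less than the raw readings
```

A larger window gives a steadier position at the cost of responding more slowly to genuine movement, so the window size would need tuning against how fast the tracked object actually moves.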
One option to deal with this would be to try sonar sensors from alternative manufacturers, but other members of the larger team have pointed out that both Roborealm and OpenCV can give an estimate of the distance to a detected object based on its size in the image. Because of this, my preferred approach would be to remove the sonar sensor from the design and replace it with image-based distance detection, provided it proves more reliable than the sonar.
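The image-based approach rests on the pinhole-camera relationship: for an object of known real-world size, its apparent size in the image is inversely proportional to its distance. The sketch below illustrates that relationship only; the focal length and object width are hypothetical placeholders, not calibrated values from our camera:

```python
def estimate_distance(real_width_cm, focal_length_px, pixel_width):
    """Estimate distance to an object of known width from its width in pixels.

    Rearranged from the pinhole model:
        pixel_width / focal_length_px = real_width_cm / distance_cm
    """
    return real_width_cm * focal_length_px / pixel_width

# Example: a 10 cm wide target viewed with a 600 px focal length,
# appearing 60 px wide in the image, works out to 100 cm away.
d = estimate_distance(10, 600, 60)
```

In practice the focal length would be obtained by calibration (e.g. measuring the pixel width of the target at a known distance), and the estimate degrades when the object is partially occluded, so the reliability comparison against the sonar would need to be tested before committing to the change.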