The objective of sprint 2 was to give the lampbot a proof-of-concept vision system to track balls and faces. Each team was to add vision processing to their robot design. The vision system originally recommended was RoboRealm.
The brief outlined five main deliverables:
1. mbed running a minimum of 4 threads and using inter-process communication (IPC).
2. The cone part of the lamp design tracking a ball and faces in 2D space (i.e. pan/tilt).
3. RoboRealm running on a PC and communicating with the mbed.
4. Combining the cone's pan/tilt movement with the sonar-driven in/out movement of the body.
5. The client wanted to see a video plus a demonstration of functionality at the end of the sprint.
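The pan/tilt tracking in deliverables 2 and 4 essentially maps a detected target's position in the camera frame to two servo angles. A minimal sketch of that mapping, with illustrative frame size and servo ranges (these names and values are assumptions for the example, not the team's actual code):

```python
def pixel_to_pan_tilt(cx, cy, frame_w=320, frame_h=240,
                      pan_range=90.0, tilt_range=60.0):
    """Map a detected target centre (cx, cy) in pixels to pan/tilt
    angles in degrees, where (0, 0) means 'look straight ahead'.
    Frame size and servo travel are illustrative assumptions."""
    # Normalised offset from the frame centre, in [-1, 1]
    nx = (cx - frame_w / 2) / (frame_w / 2)
    ny = (cy - frame_h / 2) / (frame_h / 2)
    # Scale to half the servo travel either side of centre
    pan = nx * (pan_range / 2)
    tilt = -ny * (tilt_range / 2)  # image y grows downward
    return pan, tilt
```

A target in the centre of the frame gives (0, 0), so the cone holds still; targets off-centre command proportional pan/tilt corrections.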
At the beginning of the sprint we took the lessons learned from sprint 1 on board and designated tasks to individuals, instead of every team member doing the same thing. This proved to be a much better way of working, as we made progress far earlier than in sprint 1.

I was tasked with getting RoboRealm set up and communicating with the mbed. This proved a challenge at the beginning, as I could only get the LCD to display one character at a time, but with the help of Ali and Andras the problem was swiftly resolved.

The group decided that OpenCV running with Python might be a more viable option, so Andras began working on that. As I had never used OpenCV before and had little knowledge of Python, it was beneficial to have one of the team working in that area of the project: we share all code amongst the team, and I could ask questions about the software at any time. Although progress was being made with OpenCV, I still felt it was important to continue working with RoboRealm so that we had two options for vision systems in case OpenCV turned out not to be suitable.

My main reflection on this sprint is that we worked much better as a team, communicated better, and saw results from our teamwork far sooner than in sprint 1.
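The one-character-at-a-time LCD symptom is typical of forwarding each received serial byte to the display as it arrives, instead of buffering bytes until a complete, delimiter-terminated message has been received. A minimal sketch of the framing idea, written in Python for clarity (the real fix lived on the mbed; the class and delimiter here are illustrative assumptions, not the team's actual code):

```python
class LineFramer:
    """Accumulate incoming serial characters and release only
    complete newline-terminated messages. Illustrative sketch."""

    def __init__(self):
        self.buffer = ""

    def feed(self, chunk):
        """Feed newly received characters; return any complete lines."""
        self.buffer += chunk
        lines = []
        # Split off every finished line; keep the partial tail buffered
        while "\n" in self.buffer:
            line, self.buffer = self.buffer.split("\n", 1)
            lines.append(line)
        return lines
```

Feeding "he" then "llo\nwo" yields no output, then ["hello"]; the trailing "wo" stays buffered until its newline arrives, so the display only ever sees whole messages.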