Monday, 4 April 2016

Sprint 2 Reflection, Kamil Mech

Apologies for the late post.

Technical Reflection
The testing framework has been successfully implemented. It is designed to work via the `-test` flag, which takes in arguments. These arguments are the names of configurations in ModuleList.py.

In Terminal
```
-test Color-Detect Servo-1
```

In ModuleList.py
```
# Each test name maps to the list of modules that configuration loads
tests = Obj({
  "Color-Detect": ["Color-Detect"],
  "Servo-1": ["Servo-1"],
  "Sound": ["Sound", "Voice"],
  "GUI-Item-Detect": ["GUI", "Item-Detect"],
  "GUI-Face-Detect": ["GUI", "Face-Detect"],
  "GUI-Color-Detect": ["GUI", "Color-Detect"]
})
```

The framework iterates over the names and finds the corresponding configuration in ModuleList.py. Each config contains a list of modules to load. The configuration name is important, as it is prepended with `Test-` to look for the test module.
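A minimal sketch of that lookup, assuming a hypothetical `load_module` helper (the real framework internals may differ):

```
import importlib

def load_module(name):
    # Import a module by its configuration name; the real loader may
    # read the file directly instead, since hyphens are awkward in imports.
    return importlib.import_module(name.replace("-", "_"))

def run_tests(test_names, tests):
    # For each name passed to -test, load that configuration's modules
    # plus the matching Test-* module, then hand control to the tester.
    for name in test_names:
        modules = [load_module(m) for m in tests[name]]
        tester = load_module("Test-" + name)
        tester.run(modules)
```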

For example, when `python main.py -test Color-Detect` is run, the framework needs to load the modules named `Color-Detect.py` and `Test-Color-Detect.py`. These modules are then internally prepared to perform the testing. If a module needs to make internal changes for the testing environment, it has access to the `self.flags` variable, which contains `self.flags.test` if that flag was specified. `self.flags` keeps a record of all flags and their arguments, which allows for dynamic and very fast support of new flags with just one line of code.
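A rough illustration of how such a flag table might be built; the parsing logic below is my own sketch, not the actual framework code:

```
import sys

def parse_flags(argv):
    # Collect every "-flag arg1 arg2 ..." group into a dict, so a new
    # flag is supported simply by appearing on the command line.
    flags = {}
    current = None
    for token in argv:
        if token.startswith("-"):
            current = token.lstrip("-")
            flags[current] = []
        elif current is not None:
            flags[current].append(token)
    return flags

# e.g. python main.py -test Color-Detect Servo-1
# -> {"test": ["Color-Detect", "Servo-1"]}
flags = parse_flags(sys.argv[1:])
```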

The following was the plan regarding the individual test cases:
1) Sound and Voice - Input/Output comparison
2) Face, Shape and Color Detect, Face Recognition - Input/Output comparison with a slight adjustment. Each module needs to read an image from the tester, not the camera, if `-test` is specified (see the sketch after this list).
3) GUI - Input/Output comparison with a slight adjustment. The module exposes references to the Kivy buttons it uses, so click events can be triggered by the tester.
4) Servos - Input/Output comparison. Also, manual testing by observation while the tester feeds a sequence of instructions into the servo.
5) Inter-Pi Comms - This one requires the cooperation of two smart-tester instances, but is doable.
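To make case 2 concrete, an input/output comparison for a detection module could look roughly like this (a sketch with assumed names; `cv2` stands in for whatever image loading the project actually uses):

```
import cv2  # OpenCV is an assumption here, used only to load the test image

def run(modules):
    # Hypothetical Test-Color-Detect: feed a known image instead of a
    # camera frame and compare the module's output to the expected value.
    color_detect = modules[0]
    image = cv2.imread("test_data/red_square.png")  # tester-supplied input
    result = color_detect.detect(image)             # assumed module API
    expected = "red"
    print("PASS" if result == expected else "FAIL: got %r" % result)
```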

Unfortunately, the implementations of these features arrived late, and I was unable to produce test cases for them before the sprint ended.

---

At the end of the sprint I took part in the task of combining all the modules. Unfortunately, we only managed to combine four modules on the day. I think we should have allocated more time for the combination. It was not difficult, but it turned out more time-consuming than expected.

Team Reflection
Everybody in the team was very busy, yet we always found time to cooperate. In my particular case, I needed to contact everyone on a day-by-day basis to see where they were in terms of implementation, so I could find out whether a feature was ready to test. Unfortunately, it turned out that plenty of features were not test-ready until the very end of the sprint. Taking this into account, I spent my time polishing the testing framework. A number of team members requested consultations with me regarding TVCS and threading, and I was happy to aid them.

Next Sprint
First of all, I would like to implement the test cases as designed.
On the combination day I had an interesting chat with Michal, which convinced me that TVCS needs to be upgraded. Processes have an obvious advantage over threads, and it seems possible to implement this change seamlessly (i.e. so that the way TVCS is used does not change). This would lead to TVCS v0.2.0 (see Semantic Versioning).
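A sketch of the kind of seamless swap I have in mind, assuming TVCS currently hands out `threading.Thread`-like workers (the names here are hypothetical):

```
import multiprocessing
import threading

USE_PROCESSES = True  # the only switch; callers of Worker stay unchanged

# Thread and Process share the start()/join() interface, so code written
# against the thread-based TVCS keeps working once this flips to processes.
Worker = multiprocessing.Process if USE_PROCESSES else threading.Thread

def task(name):
    print("running", name)

if __name__ == "__main__":
    w = Worker(target=task, args=("Color-Detect",))
    w.start()
    w.join()
```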
Other than that, we have plenty of work ahead regarding the desired ESD Architecture (see diagram).

Kamil Mech, post #8
