Thursday, 9 February 2017

mbed firmware proof of concept (Anton)

RTOS

The firmware expects serial data as normalised floats in the range 0.0 to 1.0. This way we don't have to care about the resolution of the computer vision system.

The serial interrupt is handled by an ISR which sets a signal. This signal blocks a serial reader thread; after receiving the signal, the thread continues and reads the position data. The data is then put into a memory pool, queued, and read by the gimbal thread. The serial reader thread has the highest priority, so the data is ready very quickly after the ISR, but it doesn't block the whole OS as it would if we did the processing and logic inside the ISR. I tried to split the code into logical chunks like "serial" and "Persona". The Persona class is not finished; it's a work-in-progress sketch of an idea of how we could do things.
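A minimal sketch of how that flow could look with mbed OS 5 RTOS primitives is below; the Position struct, the 0x1 signal flag, the pool size and the scanf-based parsing are all illustrative assumptions, not the final protocol.

```cpp
// Sketch of ISR -> signal -> reader thread -> pool/queue -> gimbal thread,
// assuming mbed OS 5 APIs. Position, the 0x1 flag and the scanf parsing
// are illustrative placeholders.
#include "mbed.h"

struct Position {
    float x;   // normalised 0.0 - 1.0 from the vision system
    float y;
};

Serial pc(USBTX, USBRX);
Thread readerThread(osPriorityRealtime);      // highest priority
MemoryPool<Position, 16> positionPool;
Queue<Position, 16>      positionQueue;

// The ISR does no parsing, it only wakes the reader thread.
void onSerialRx() {
    readerThread.signal_set(0x1);
}

void serialReader() {
    while (true) {
        Thread::signal_wait(0x1);             // block until the ISR signals
        Position *pos = positionPool.alloc();
        if (pos) {
            pc.scanf("%f %f", &pos->x, &pos->y);  // placeholder protocol
            positionQueue.put(pos);           // hand off to the gimbal thread
        }
    }
}

int main() {
    readerThread.start(serialReader);
    pc.attach(&onSerialRx, Serial::RxIrq);

    while (true) {                            // gimbal thread lives here for now
        osEvent evt = positionQueue.get();
        if (evt.status == osEventMessage) {
            Position *pos = (Position *)evt.value.p;
            // ... move the gimbal towards pos->x, pos->y ...
            positionPool.free(pos);
        }
    }
}
```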



The main source currently contains the gimbal thread, but later it will have to be refactored into a separate file or class.


The globals.h contains the common type definitions (at the moment it's just one).


So far I haven't found a use for mutexes / semaphores, and I didn't want to force them where it didn't feel right to me. But I will use timers and tickers to drive time for the Persona state, which will change depending on the time. Timing for animations will be important as well.
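As a rough idea of the timing side, a Ticker could bump a counter that the Persona reads to age its state; the 100 ms period and the names here are just assumptions:

```cpp
// A Ticker bumping a millisecond counter that the Persona can read to age
// its state; the 100 ms period and variable names are assumptions.
#include "mbed.h"

Ticker personaTicker;
volatile uint32_t personaTimeMs = 0;

void onPersonaTick() {        // runs in interrupt context, keep it tiny
    personaTimeMs += 100;
}

int main() {
    personaTicker.attach(&onPersonaTick, 0.1f);   // every 100 ms
    while (true) {
        Thread::wait(1000);   // Persona logic would poll personaTimeMs here
    }
}
```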

Camera and lamp offset

Because the camera will be at a different location, it might be helpful to do some transformations first; each lamp would have different transformations applied, from simple range limits and offsets to inverting an axis (I was able to switch between tracking me in reality and tracking my mirrored image in the webcam).

This might not be the perfect solution, but it might help a bit, and I hope it could get us into the "good enough" category, so we wouldn't have to do a proper solution.
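For illustration, a per-lamp transform could be as simple as a struct of offsets, axis flips and clamps; the fields and the clamping ranges below are made up:

```cpp
// Illustrative per-lamp transform: offsets, axis flips and range clamps
// applied to the normalised 0.0-1.0 input. All values are made up.
struct LampTransform {
    float offsetX, offsetY;   // compensate for camera vs. lamp placement
    bool  invertX, invertY;   // e.g. un-mirror the webcam image
    float minX, maxX;         // clamp into the lamp's reachable range
};

static float clampf(float v, float lo, float hi) {
    return (v < lo) ? lo : (v > hi) ? hi : v;
}

void applyTransform(const LampTransform &t, float &x, float &y) {
    if (t.invertX) x = 1.0f - x;
    if (t.invertY) y = 1.0f - y;
    x = clampf(x + t.offsetX, t.minX, t.maxX);
    y = clampf(y + t.offsetY, 0.0f, 1.0f);
}
```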



Behaviour / Persona state machine


The lamp would act differently depending on its state, and each lamp could have a different persona. Each persona would be characterised by different character modifiers: attention span could cause the lamp to go idle when there isn't enough stimulus, temper would allow it to get annoyed, etc.

As an example, the stimulus could be the total distance tracked over a window of 3 (or probably more) seconds. This value would be multiplied by the attention span modifier for one condition and by the annoyance modifier for another. If it's under a specific threshold the lamp could go into a daydream state, while if it's above the annoyed threshold it would go into the annoyed state (refuse tracking and play a sad animation). The thresholds would be the same for each lamp, but the modifiers would be different. This way each lamp can reach some state faster than others; we can make a lamp oversensitive to some stimuli or have it ignore them completely. It should allow easy tuning of how quickly the lamp gets bored or annoyed.
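A hedged sketch of that decision, with made-up modifier and threshold values, could look like this:

```cpp
// Bored/annoyed decision over a ~3 s stimulus window. The modifier and
// threshold numbers are made up for illustration.
enum PersonaState { TRACKING, DAYDREAM, ANNOYED };

struct Persona {
    float attentionSpanModifier;  // >1.0 keeps the lamp interested longer
    float annoyanceModifier;      // >1.0 makes the lamp irritable
};

const float DAYDREAM_THRESHOLD = 0.05f;  // same for every lamp
const float ANNOYED_THRESHOLD  = 2.50f;  // only the modifiers differ

PersonaState evaluate(const Persona &p, float distanceOver3s) {
    if (distanceOver3s * p.attentionSpanModifier < DAYDREAM_THRESHOLD)
        return DAYDREAM;                  // not enough stimulus -> drift off
    if (distanceOver3s * p.annoyanceModifier > ANNOYED_THRESHOLD)
        return ANNOYED;                   // too much -> refuse tracking
    return TRACKING;
}
```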

I think this approach should allow us to add new states to the state machine, and their allowed transitions, easily. The state machine could be hardcoded or implemented as a 2D matrix.
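If it went the matrix route, the allowed transitions could be a small boolean table, reusing the PersonaState enum from the sketch above; the entries themselves are assumptions:

```cpp
// The 2D-matrix variant: true means the transition from the row state to
// the column state is allowed. Entries are illustrative assumptions.
const bool allowedTransition[3][3] = {
    //              TRACKING  DAYDREAM  ANNOYED
    /* TRACKING */ { false,    true,     true  },
    /* DAYDREAM */ { true,     false,    false },
    /* ANNOYED  */ { true,     true,     false },
};

bool canTransition(PersonaState from, PersonaState to) {
    return allowedTransition[from][to];
}
```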




Proposed firmware flow



Tracking will get the data from the serial port.

Conditions will transform or offset the data depending on the relative position of the camera and the lamp, and could use filters to smooth noisy tracking as well.

These tracked movements will go into the Persona, where depending on the state the data will either pass through or be modified / overwritten by animations (if an idle lamp gets startled, it will first play a shaking animation, then go into the tracking state, and only then fully track the object).
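Stitched together, the proposed flow could look roughly like the loop below; every name here is a placeholder, none of these functions exist in the firmware yet:

```cpp
// Roughly how the stages above could chain together; all functions are
// placeholders for illustration only (Position as in the first sketch).
while (true) {
    Position raw         = readTrackingFromSerial();      // Tracking
    Position conditioned = condition(raw);                 // offsets, filters
    Position intent      = persona.process(conditioned);   // pass through / animate
    Position safe        = clipToSafeSpace(intent);        // collision avoidance
    driveServos(safe);
}
```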

Collision avoidance could be a problem: the lamps could interfere with and damage each other, so we need to check the others' positions before every move. We will probably need CAN bus messages to broadcast our position, so every lamp in the pack knows where we are and where the others are. This could be a big can of worms, and I wonder how much physical spacing will be needed and how much margin for error we will have. An interesting problem to solve is what to do when we are changing location: should we broadcast the old position, or the new position where we are not yet (leaving our current position open for impact), or create a "no fly zone" volume between the current and the new position? Some shortcuts will probably have to be taken (such as spreading the lamps a bit further apart), or it could turn into a logistical problem, affecting the animations as well.
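For the broadcast itself, mbed's CAN API could carry the position in a small frame; the pins (CAN2 on an LPC1768 mbed), the message ID and the payload layout below are assumptions:

```cpp
// Broadcasting our position over CAN with mbed's CAN API. The pins,
// message ID and payload layout are assumptions.
#include "mbed.h"
#include <string.h>

CAN can(p30, p29);                       // rd, td - depends on actual wiring

const unsigned int LAMP_MSG_ID = 0x100;  // illustrative per-lamp ID

void broadcastPosition(float x, float y) {
    char payload[8];
    memcpy(payload,     &x, sizeof(float));
    memcpy(payload + 4, &y, sizeof(float));
    can.write(CANMessage(LAMP_MSG_ID, payload, 8));
}
```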

Between the Persona and the collision avoidance there could be another memory pool, where we could push a whole animation at once and let the queue reader step through it at the required pace. It could be possible to send the coordinates together with a timestamp and even a transition function.
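One possible shape for such an animation queue entry, with the transition function carried along as a function pointer (all fields are assumptions):

```cpp
// A possible animation queue entry: coordinates, a timestamp and the
// transition (easing) function carried along. All fields are assumptions.
#include "mbed.h"

struct Keyframe {
    float    x, y;                    // target position, normalised
    uint32_t timeMs;                  // when the target should be reached
    float  (*transition)(float t);    // easing curve, t in 0.0 - 1.0
};

MemoryPool<Keyframe, 32> animationPool;   // Persona pushes a whole animation
Queue<Keyframe, 32>      animationQueue;  // reader paces through it in time
```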

Because servos can only move linearly, all other motion curves would have to be implemented in software, which costs a good bit of processing power. It's not complex and could be stored in lookup tables, but it would require frequent updates, and that means a lot of context switching in the OS and a lot of overhead. Even though this would be very nice, I would try it only after the precision of the servos is proven to be excellent and we have a lot of spare CPU cycles.
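As a sketch of the lookup-table idea, a tiny precomputed smoothstep curve could drive the easing; the 16-entry size and the curve choice are arbitrary examples:

```cpp
// Lookup-table easing: a precomputed smoothstep curve sampled at 16 points.
// Table size and curve choice are arbitrary examples.
const int LUT_SIZE = 16;
float easeLut[LUT_SIZE];

void buildLut() {
    for (int i = 0; i < LUT_SIZE; i++) {
        float t = (float)i / (LUT_SIZE - 1);
        easeLut[i] = t * t * (3.0f - 2.0f * t);   // smoothstep
    }
}

// Interpolated position between start and end at normalised time t.
float ease(float start, float end, float t) {
    int idx = (int)(t * (LUT_SIZE - 1));
    return start + (end - start) * easeLut[idx];
}
```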

After the movement is clipped into the "safe space", the signals can be sent to the servos.

An interesting twist would be an option to dismiss all tracking and persona behaviour and override the data with an RC transmitter for manual human control.

Possible problem with servos

I'm a bit worried whether the servo approach will be good enough and the resolution precise enough to be smooth; even a small angle error, when multiplied by long arms, can result in shaky movements. I'm wondering if stepper motors with gears would be worth a try.
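To put a rough number on it (with an assumed 30 cm arm, purely for illustration): arc length is roughly radius times angle in radians, so a single 1° step (≈ 0.0175 rad) moves the lamp head by about 0.3 m × 0.0175 ≈ 5 mm, which would be easily visible as jitter.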

More information

For more details, read the previous blog posts about the hardware and the Python tracking. Sadly I wasted a lot of my free time on the object tracking and in the end didn't spend as much time with C++ and the RTOS as I wanted, so a lot of the code is just a placeholder with no proper implementation.

I omitted the joystick feature: when I was making the PCB I got some ugly-looking signals on my oscilloscope and, worried it could hurt the board, I disconnected the motherboard, so the whole development was done on the vanilla mbed. More details are in the hardware blog post.

Results

Here is a playlist of 10 short YouTube videos showing the proof of concept in action.


About Me

I'm Anton Krug, currently in the fourth year of Applied Computing. Electronic engineering has been my hobby for a decade; I love pushing the boundaries of what can be done in-house (for example, hand-made PCBs) and pushing hardware a bit further than it desires. I've always loved the feeling of squeezing out the last bits of spare resources with hand-crafted assembler. While software can't be touched, embedded projects often interact with the real world: from thoughts you can create and interact with real matter. Electricity got my interest from a very early age, when as a toddler I connected wires to a 5 V DC lamp and then plugged it directly into the mains. Loud sound effects and the whole house in pitch black... Electricity made a strong impression on me.

What do I want to be in the future? I had to scrap my plan A of world domination when my beloved drone Louise cut me on my hands, forearms, shoulders and back. Louise, how could you?! You broke my heart! After that, the plan for an autonomous drone army was cancelled, so now a regular robot army will have to do.

My motto, summarised from the last few years, could be:

When you are turning on your contraption and you are not even slightly scared, then you are not doing it right.


Now seriously: I would like to continue with a master's if my job allows me to. I was tempted by a PhD for a while, but I'm a bit scared to take such a big leap. I enjoyed tutoring a lot, whether tutoring my classmates or volunteering for an ICT course over the past 3 years. Staying in academia could be lovely; I have a lot of ideas which I would love to try. Another interest is to continue developing my RC-related tools and OSDs; I found a market gap (there is no affordable and decent multi-rotor focused flying simulator). The simulator could be a useful framework for autonomous drone development.

Last year I fell in love with NodeJS and this year with TypeScript; backend server development is very fulfilling as well. I can do full-stack development, but I prefer staying in the backend.

Because I'm excited about a very broad range of fields, I wouldn't mind ending up in a completely different place than I planned, and therefore I don't make huge plans about where I have to end up. If a door opens with some opportunity, I will not ignore it and miss a chance just because I didn't plan it 5 years ago.

What I'm hoping to work with on this project is, for sure, C++. I used to know C / C++ well, but after getting to college we worked a lot in Java; my skills got rusty and I miss my C. Too bad we aren't doing much more C++ coding in college. I know Java is more user-friendly and easier to learn, but I just don't want to forget how to use destructors, free memory, and all the things Java developers don't have to think about.
