Thursday, December 8, 2011

Vision system for Robot

It has been a while since I posted a status update on the robot. I discovered very quickly that using just the ultrasonic ping modules would not be a very good idea – they may be good for detecting large solid objects, but when it comes to cubicle walls that are lined with sound-absorbing fabric they pretty much fail. The sensors are also placed at 30-degree intervals, so it was very easy to hit a table leg or a box while trying to avoid a bigger object and get stalled. And as the robot has casters at the front and back, it was easy to get straddled on objects lying on the ground.

I switched focus and have been busy with the vision portion of the project for a few months. The vision work consists of navigation and object detection/identification; in this post I discuss the navigation piece.

I have been exploring different technologies for depth detection for navigation. I tried the Microsoft Kinect, which provides a decent depth camera and sells for about $150 – though a couple of places had it on special for $99 on Black Friday.
A number of projects are using the Kinect, and it is used in the reference platform design for the Microsoft Robotics Studio. The Kinect uses technology created by PrimeSense: an infra-red laser projects a pattern of dots onto the scene, and an IR camera observes that pattern. The PrimeSense processor then calculates depth by triangulation – from how far each dot has shifted in the camera image – a technique known as structured light (not time of flight, which times the light's round trip the way sonar times sound). The accuracy of this camera is about 1 cm. As the camera uses a sparse pattern of dots, it does not provide a lot of accuracy at the edges of objects either. If you look at the depth image of a static scene you will notice that it is extremely noisy, i.e. edges move.
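For intuition, structured-light depth is plain triangulation, the same math as a stereo pair. Below is a minimal sketch; the focal length, baseline and disparity numbers are purely illustrative, not actual Kinect specifications.

    /* Sketch: triangulated depth as used by structured-light and stereo
     * systems alike: Z = f * b / d, where f is the focal length in pixels,
     * b is the projector-to-camera baseline and d is the observed shift
     * (disparity) of a dot in pixels. Numbers below are illustrative. */
    #include <stdio.h>

    double depth_m(double focal_px, double baseline_m, double disparity_px)
    {
        if (disparity_px <= 0.0)
            return -1.0;                /* no match: depth unknown */
        return focal_px * baseline_m / disparity_px;
    }

    int main(void)
    {
        /* e.g. f = 580 px, b = 7.5 cm, d = 20 px -> about 2.2 m */
        printf("depth: %.2f m\n", depth_m(580.0, 0.075, 20.0));
        return 0;
    }

Note how depth resolution degrades with distance: a one-pixel disparity error matters far more at 20 px than at 200 px, which is consistent with the noise you see in the depth image.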

As the first version of the Kinect SDK only had 32-bit libraries (the November release now supports 64-bit development) and carries a non-commercial-use license, I decided to look elsewhere. Asus also released a camera called the Xtion, which uses the same base technology from PrimeSense; it costs about $50-$70 more. The Xtion uses the OpenNI open source libraries, which would be perfect for basic navigation.

However, as neither of these depth cameras has great accuracy, they cannot be used for the more detailed analysis required by the object detection portion of my project. I will go into using stereo cameras and scanning lasers in my next post.

Tuesday, August 30, 2011

Building a telepresence Robot - part 3

My last post left off where I was playing around with the Parallax Propeller micro controller. There are two official ways to program the controller – SPIN or assembler – and a number of unofficial, unsupported ways. I played a bit with both, and as I am more comfortable with C, I decided to use the Catalina C compiler package, which includes the Code::Blocks editor. Unfortunately there is no easy way to use SPIN objects obtained from the Object Exchange with C, so I had to rewrite the wheel control object I had downloaded in C. The Propeller chip has 8 cores – cogs, as they are officially named – which is great as one can have an isolated routine run on each cog.
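As an illustration of the cog model, here is a sketch of dedicating one cog to reading the front ping while the main cog drives. The cog_run, ping, motor and delay helpers are hypothetical stand-ins for whatever primitives your Propeller toolchain actually provides (Catalina and the other C toolchains each have their own); the point is the structure – one isolated loop per cog, sharing a volatile variable.

    /* Sketch: one cog runs the sensor loop while the main cog drives.
     * All extern functions are hypothetical stand-ins for the real
     * toolchain primitives (cog launch, ping read, motors, delay). */
    extern void cog_run(void (*fn)(void *), void *arg,
                        long *stack, unsigned stack_bytes);
    extern int  ping_distance_cm(int pin);
    extern void stop_motors(void);
    extern void drive_forward(void);
    extern void msleep(unsigned ms);

    #define FRONT_PING_PIN 8

    static volatile int front_cm = -1;   /* shared between the two cogs */
    static long sensor_stack[64];        /* private stack for the sensor cog */

    static void sensor_loop(void *unused)
    {
        (void)unused;
        for (;;) {
            front_cm = ping_distance_cm(FRONT_PING_PIN);
            msleep(50);                  /* ~20 readings per second */
        }
    }

    int main(void)
    {
        cog_run(sensor_loop, 0, sensor_stack, sizeof sensor_stack);
        for (;;) {                       /* main cog: trivial avoidance */
            if (front_cm >= 0 && front_cm < 30)
                stop_motors();           /* something within 30 cm ahead */
            else
                drive_forward();
            msleep(20);
        }
    }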



As you can see in the photo, I have installed some of the ultrasonic pings on the front and a single one on the back. I still have a few more, but I will install them when I need them. The base kit came with an additional mounting board for a battery, but as it had holes that matched the proto board, I mounted the proto board on it and the whole board over the motors. I had to make a quick trip to Home Depot to get a couple of 2.5" stainless bolts and a few nuts.
When I ordered the proto board I also ordered the adapter kit that allows me to connect a VGA monitor to it. This makes debugging a lot easier, as you don't need to fire up a serial terminal every time to see what is going on.
Using each component is pretty simple - you compile your program on a PC and download it to the controller board via a USB cable. The pings are also simple to use - you pulse the pin the sensor is attached to and count the number of processor ticks it takes to get a response. This gives the distance to an object in clock cycles, which can be converted to time and then to distance (see the sketch below). The motors are a little more complex, so I just downloaded a routine to manage them: you pass a distance to the motor controller and it rotates the wheels until that distance is covered.
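The tick-to-distance conversion is a one-liner once you fix the constants. A minimal sketch, assuming the usual 80 MHz Propeller system clock and a speed of sound of roughly 343 m/s at room temperature; the echo travels out and back, hence the divide by two.

    /* Sketch: convert a ping echo time, measured in clock ticks, to cm.
     * Assumes an 80 MHz clock; sound covers ~34300 cm/s, and the echo
     * travels the distance twice (out and back). */
    #define CLOCK_HZ    80000000UL
    #define SOUND_CM_S  34300UL

    unsigned int ticks_to_cm(unsigned long echo_ticks)
    {
        unsigned long echo_us = echo_ticks / (CLOCK_HZ / 1000000UL); /* -> microseconds */
        return (unsigned int)((echo_us * SOUND_CM_S) / 1000000UL / 2UL);
    }

As a sanity check, an echo of about 1160 microseconds works out to roughly 20 cm, which matches the ~58 microseconds-per-centimetre round-trip rule of thumb commonly used for these sensors.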
The next step in this project will be to use the information I am getting from the pings to have the robot guide itself through the office without hitting anything. I will cover this in the next post on this blog.

Tuesday, August 16, 2011

Building a telepresence Robot - part 2

Well, I got off to a good start - I had to find an air pump to inflate the wheels to 36 psi, and one of the screws on one of the motor assemblies needed a washer, else the bearing would take too much strain. The overall quality of the parts is exceptional - every part is precision machined out of aluminum and every hole aligns correctly. This makes assembly a walk in the park.


Now that I have the major hardware installed, I am going to play around with the micro controller and see how much code I need to write to control the servos and read information from the ultrasonic pings.
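For reference, hobby servos are driven by pulse width: a high pulse of roughly 1.0-2.0 ms repeated about every 20 ms, with 1.5 ms being neutral (a full stop, for the continuous-rotation type). A minimal sketch; pulse_out_us and msleep are hypothetical stand-ins for the toolchain's pin-pulse and delay primitives.

    /* Sketch: drive a hobby servo by sending one control pulse per
     * ~20 ms frame. pulse_out_us() and msleep() are hypothetical. */
    extern void pulse_out_us(int pin, unsigned us);
    extern void msleep(unsigned ms);

    #define SERVO_PIN 16

    void servo_hold(unsigned pulse_us)   /* 1000..2000; 1500 = neutral */
    {
        for (;;) {
            pulse_out_us(SERVO_PIN, pulse_us);  /* one control pulse */
            msleep(20);                         /* standard ~20 ms frame */
        }
    }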

Building a telepresence Robot - part 1

After recently reading articles in IEEE Spectrum, and having experience with OpenCV and a background in electronic engineering, I decided it would be a fun exercise to see if I could hack together a robot proxy using off-the-shelf components and Open Source software.

I found a great starting point at Parallax Inc, which provides a couple of embedded processor boards and a really sturdy base. For power I decided to use a 12 V lead-acid UPS battery I have lying around for the motors, together with a universal BEC to give me 6 volts for the servos and microprocessor board.


I ordered and received the parts; now the fun really begins - follow this blog to keep track of how the project progresses.

Friday, January 7, 2011

Converting from StarTeam to Subversion - Part 2

After spending another month on this I finally managed to get a reasonable import for each project. Links were tricky, and I ended up just saving the link locations to a file so that I could go back and manually add them to SVN when I was done.
StarTeam allowed multiple files with the same name, which caused a lot of issues. I also discovered that revision and version labels were handled differently, and a revision label could contain files that were dated after its creation. Doing a difference between a revision label and a version label would often end up causing a huge mess.
My final working code only used the differences between revision labels to get history. I processed every revision in every branch completely, creating tags for each version label as I hit the timestamp for that label. This made it easier to create branches from the tags than to try to determine which version of which file should be included. The sketch below shows the core of that sweep.
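A sketch of the idea (in C, with hypothetical types and emit helpers standing in for the real Subversion dump-file writing): revisions are replayed in timestamp order, and whenever the replay passes a version label's timestamp, a tag is emitted from the repository state at that moment.

    /* Sketch: replay revisions in timestamp order, emitting a tag from
     * the current repository state each time we pass a label's
     * timestamp. Types and emit_* helpers are hypothetical. */
    #include <stddef.h>

    typedef struct { const char *name; long timestamp; } Label;
    typedef struct { const char *path; long timestamp; int version; } Revision;

    extern void emit_commit(const Revision *r);
    extern void emit_tag_from_head(const Label *l);

    void sweep(const Revision *revs, size_t nrevs,
               const Label *labels, size_t nlabels)
    {
        size_t next = 0;
        for (size_t i = 0; i < nrevs; i++) {      /* revs sorted by timestamp */
            while (next < nlabels &&
                   labels[next].timestamp <= revs[i].timestamp)
                emit_tag_from_head(&labels[next++]); /* state == labeled state */
            emit_commit(&revs[i]);
        }
        while (next < nlabels)                    /* labels after the last revision */
            emit_tag_from_head(&labels[next++]);
    }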

I hope I never have to do this type of conversion again - a simple task (or so I thought) of converting 15+ years of StarTeam projects to SVN took me about 3 months of development time, which far exceeded the 2 weeks I had set aside for the project.

Our largest project was just not reliably convertible; for this one I took a snapshot of each branch and committed that to SVN - if someone wanted more history they could get it from StarTeam :).

Wednesday, October 27, 2010

Converting from StarTeam to Subversion - Part 1

We have been long-time users of StarTeam (since 2000) and reasonably content with our installation. We do, however, use a number of different operating systems, and with Borland no longer giving appropriate support to OS X, and with the Visual Studio integration being extremely bulky and only available on the top tier of the product, I decided it was time to switch to something new. A number of our developers had used Subversion in the past, so this was the obvious choice - the repository would however need to be converted. Having gone through the conversion from MKS to StarTeam in 2000, I thought I would take responsibility for the StarTeam to Subversion conversion.

I found a utility written in Java called svnimport from Polarion, and it did a reasonable job until it hit our main project and kept throwing out-of-memory exceptions. I tried all the Java tricks of setting a large heap size and aggressive heap usage, but it would always die at about 1.3 GB of allocated memory (I had 12 GB, so I think the issue was the way the utility was written). After a bit of searching I found the source code for svnimport. I downloaded a copy of MyEclipse and started debugging svnimport to see if I could fix the issues or change the way it was using memory. I am, however, not a big fan of Java, so I decided to convert the project to C# and use the Borland .NET libraries.

The first issue I discovered with svnimport was that it did not do its branching or tagging intelligently: if you had 10 branches that were created at different times, there would be 10 copies of everything up to the branch time. This meant that if you had a 1 MB file with 10 revisions in your main trunk, it would be present 10 times in every branch. This design caused one project to take 18 hours to dump and 4 days to import, creating a 24 GB Subversion repository.

The first thing I changed was to build up a list of branches and tags sorted by creation date (you need to check the attributes to see if the branch is based on a label and then get the appropriate date from the label item).

Svnimport builds up a list of actions/commits based on the revisions of each file. I changed this to only process files that had a modified time later than the creation time of the branch - this cut down on the duplicate-file issue above. With these 2 changes in place I went about updating the dump file creation. The commits are sorted by date, so all I had to do was insert code that checks whether a commit date is later than a branch or label creation date and, if so, inserts an action to create the branch/label as a copy of the current revision. Tags were easy as they are just a snapshot in time. Both changes are sketched below.
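Sketched here in C (the real tool ended up in C#): branches are sorted by creation date up front, each branch is seeded with a single copy action instead of duplicated file revisions, and only revisions modified after the branch's creation are replayed into it. All types and emit helpers are hypothetical stand-ins for the real dump-stream writing.

    /* Sketch of the two changes: sort branches by creation date, emit
     * each branch as a cheap copy of the current trunk state, and skip
     * file revisions older than the branch creation time. */
    #include <stdlib.h>

    typedef struct { const char *name; long created; } Branch;
    typedef struct { const char *path; long modified; } FileRev;

    extern void emit_copy_from_current(const Branch *b); /* hypothetical */
    extern void emit_commit(const FileRev *r);           /* hypothetical */

    static int by_creation(const void *a, const void *b)
    {
        long d = ((const Branch *)a)->created - ((const Branch *)b)->created;
        return d < 0 ? -1 : d > 0;
    }

    void replay_branch(const Branch *b, const FileRev *revs, size_t n)
    {
        emit_copy_from_current(b);              /* branch = svn copy, not duplicates */
        for (size_t i = 0; i < n; i++)
            if (revs[i].modified > b->created)  /* only post-branch changes */
                emit_commit(&revs[i]);
    }

    void convert(Branch *branches, size_t nb,
                 const FileRev *revs, size_t nrevs)
    {
        qsort(branches, nb, sizeof *branches, by_creation); /* oldest first */
        for (size_t i = 0; i < nb; i++)
            replay_branch(&branches[i], revs, nrevs);
    }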

I did a couple of test runs - one project that had been generating an 18 MB dump file was now generating a 3 MB dump, and the import into Subversion was solid.

I could now start working on getting a more reliable conversion going.

continued in part 2...