IndustryArena Forum > Hobby Projects > RC Robotics and Autonomous Robots > Eclipze's SMD Pick'n'Place Build....
  1. #701
    Join Date
    Oct 2011
    Posts
    141
    Quote Originally Posted by guru_florida View Post
    I'm not exactly sure what he is using, but the right B&W image is a high-pass version of the left one; a high-pass basically gives you an edge-detection image. I think I have seen this before: after the high pass, he calculates the line slope at each pixel, i.e. finds the lightest pixels before and after, calculates a slope, and outputs it. When the part is aligned, the slopes should generally be 0 or infinity (right angles).

    .....

    That's making sense. The video shows the high-pass using something like a Canny edge detector. He mentions that he's adding up pixels in the video, and the result is the bottom graphs with peaks and valleys. I think this might be faster than detecting the lines that form the part edge using something like a Hough transform and then calculating slope:

    Hough

    However, Nisma stated that method took a few hundred ms while the histogram method took tens of milliseconds (whatever that algorithm is?).

    I'm kind of lazy and hoping someone can just contribute some code snippets since I'm new to vision. It's fun stuff but someone could figure this out 10 times faster than me.

    Thanks for the response. Sounds like you use a real PNP so I may ask you about the interface software as I develop one in Visual C#.

    Right now I'll probably just focus on hardware provisions with a top and bottom camera so I can continue writing a generic interface using Visual C# (which I'm also learning). The code will be open source so experts out there can easily contribute. Here's a vid of what I have so far:

    00040 - YouTube
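A minimal sketch of the "adding up pixels" idea mentioned above (plain NumPy, a toy image; this is my own illustration, not the code from the video):

```python
import numpy as np

def projection_histograms(binary):
    """Sum a 0/1 image down each column and across each row.
    These are the 'peaks and valleys' graphs from the video."""
    xsum = binary.sum(axis=0)   # one value per column -> X histogram
    ysum = binary.sum(axis=1)   # one value per row    -> Y histogram
    return xsum, ysum

# Toy frame: the 'part' is a bright axis-aligned rectangle on a dark field.
img = np.zeros((40, 40), dtype=np.uint8)
img[10:30, 15:25] = 1

xs, ys = projection_histograms(img)
# With the part square to the camera, the histogram edges are sharp:
# every column inside the part carries the part's full height.
assert xs.max() == 20 and ys.max() == 10
```

A Hough transform finds the edge lines explicitly; the projection needs only one pass over the image, which would explain why the two timings Nisma quotes differ by an order of magnitude.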

  2. #702
    I started modifying my CNC machine to do PnP, but I just ran out of time, bit the bullet and bought a used machine. It's 10 years old but the software is quite good, I can't complain, it's just not documented well. I have to admit, PnP is tough even when you buy a machine. Getting the alignment and everything tweaked takes skill and experience. I've had mine for a year and I am still learning things each time, and yet the SMT guy I bought it from was amazed that I was doing so well after only a week. Must have been the CNC and programming experience that gave me an edge. However, if you have the time to dedicate to DIY PNP, I highly recommend it - the experience is worth it.

    You are right on to concentrate on the vision part, I think that's the deal maker or breaker. Your machine looks great! Where did you get that?

    I love programming in C#, great language. I have some libraries I've written that may be of some help. You might look into a potrace algorithm for the image processing too. I have an example in my code, but there are other C# potrace implementations out there as well. Send me a PM, and we can exchange code or set up an SVN project.

    C

    My machine in action is here, you can see the laser aligners with the CyberOptics logo:
    Samsung CP20 Pick and Place (Chip Mounter) - YouTube: http://www.youtube.com/watch?v=PcvxJ4AB2I4

    There is also a video of my DIY PNP on my channel.


    Quote Originally Posted by ImagineRobots View Post
    That's making sense. The video shows the high-pass using something like a Canny edge detector. ....

    Right now I'll probably just focus on hardware provisions with a top and bottom camera .... Here's a vid of what I have so far:

    00040 - YouTube

  3. #703
    One more thing to note: you should put appropriate filters and lighting on the camera. Both my down-facing and up-facing cameras only pick up shiny gold/copper/tin pins and non-masked pads. Even the white solder mask shows up as almost black (dark gray). No shadows show up because everything else is near black anyway.

    Also, my up facing camera detects only the pins, not the part body.

    Good filters should make the image processing much simpler. Shadows are really bad for image detection.

  4. #704
    Join Date
    Oct 2011
    Posts
    141
    Quote Originally Posted by guru_florida View Post
    I started modifying my CNC machine to do PnP, but I just ran out of time, bit the bullet and bought a used machine. ....

    You are right on to concentrate on the vision part, I think that's the deal maker or breaker. Your machine looks great! Where did you get that?

    ......
    My machine in action is here, you can see the laser aligners with the CyberOptics logo:
    That's a really nice machine you have. I'm trying to build a low cost machine so feeders are out for now until someone comes up with a cheapo version.

    I'm actually running out of time and will shift focus away from vision. I think I can do blind placement of down to 0603 parts and then use human assistance to help place the TQFP chips. Just want to make sure I write the code so others can add vision algorithms easily.

    I used to code in VB6, and Visual C# is like VB but with C-like syntax. I plan to set up the code on GitHub once I finish the preliminary interface in about 2 months. This work is very similar to the OpenPNP project, which is coded in Java.

    My machine was built from scratch using extrusions, Delrin and ABS machined on my CNC'd X3 mill. I'm so glad I made a power drawbar for it:

    Movie_0004.wmv - YouTube: http://www.youtube.com/watch?v=tbUNKx5wrmw

  5. #705
    Join Date
    Oct 2011
    Posts
    141
    Quote Originally Posted by guru_florida View Post
    One more thing to note, you should put some appropriate filters and lighting on the camera. Both my down-facing and up-facing camera only pick up shiny gold/copper/tin pins and non-masked pads. Even the white solder mask shows up as almost black (dark gray). No shadows show up because everything else is near black anyway.

    Also, my up facing camera detects only the pins, not the part body.

    Good filters should make the image processing much simpler. Shadows are real bad for image detection.
    I was thinking the same thing. Vision is tricky, but once people figure it out and share information (by contributing to open-source code and forums like this), folks won't have to reinvent the wheel. I predict a PNP wave just like the current 3D printer wave. I'll be lame and say the democratization of PNP is not far off....

  6. #706
    Join Date
    Sep 2006
    Posts
    70
    It's less a problem of sharing the code; the "magic" function inside OpenCV is cvReduce. It computes in less than one ms, but the image filtering/thresholding needed beforehand usually takes 8 ms.
    If you want a small example of a horizontal/vertical histogram: image processing - word segmentation using opencv - Stack Overflow
    The real problem is the setup and pre-filtering. Either you have experience with it, or it's a pain in the ass. Mostly, the key in this type of fast image recognition is correct lighting.
    If the lighting is really bad, even the best filtering can only produce mediocre results. The same applies if the USB camera doesn't have the needed controls for contrast selection and so on.
    One alternative is that someone, maybe me, sells the hardware and lighting, and then it's mostly simple plug and play; figure $100 for three USB cameras including illumination.
    Post three different images taken from the real P&P (not just a webcam with undefined lighting and package type) and I'll post the application/code/DLL, whatever, for the image recognition.
    The images should not be compressed with JPEG; BMP works better if taken as a snapshot from the camera. Please add an image of a standard newspaper and one with a color photo of a person's face
    so that I can estimate/see the contrast and brightness settings used.

  7. #707
    Join Date
    Sep 2006
    Posts
    70
    As no one has responded, even though I was absent here a long time, I'll post the relevant code, two images and one example application. The timing is only indicative; there is a lot of memory
    allocation, inefficient image painting, and so on. It was just a test. In real usage, the algorithm has the location and rotation less than 2 ms after it has acquired the image. The image must be given as an argument to the executable.


    cvCvtColor(img, tmp, CV_BGR2GRAY);                  /* to grayscale (cvConvertImage doesn't take CV_BGR2GRAY) */
    cvEqualizeHist(tmp, tmp);                           /* stretch contrast */
    cvThreshold(tmp, tmp, 252, 255, CV_THRESH_BINARY);  /* keep only near-white (lit) pixels */

    static CvMat *xsum = cvCreateMat(1, tmp->width,  CV_32FC1);
    static CvMat *ysum = cvCreateMat(tmp->height, 1, CV_32FC1);

    cvDilate(tmp, tmp, 0, 1);                           /* close: fill small gaps ... */
    cvErode(tmp, tmp, 0, 1);                            /* ... then shrink back */
    bwareaopen(tmp, 15);                                /* custom helper (MATLAB-style): drop blobs under 15 px */

    cvReduce(tmp, xsum, 0, CV_REDUCE_SUM);              /* column sums -> X histogram */
    cvReduce(tmp, ysum, 1, CV_REDUCE_SUM);              /* row sums    -> Y histogram */
    Attached: rot_uplooking.bmp, uplooking.bmp, plus the example application.
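For anyone who doesn't read the old OpenCV C API, the same pipeline can be sketched in Python with NumPy alone; the 252 threshold mirrors the snippet above, and the morphology/blob cleanup is omitted for brevity:

```python
import numpy as np

def center_from_histograms(gray, thresh=252):
    """Threshold near-white pixels, project to X/Y histograms
    (the cvReduce step), and return the part centre as the
    midpoint of each histogram's lit span."""
    binary = (gray >= thresh).astype(np.uint32)
    xsum = binary.sum(axis=0)     # like cvReduce(..., 0, CV_REDUCE_SUM)
    ysum = binary.sum(axis=1)     # like cvReduce(..., 1, CV_REDUCE_SUM)
    xs = np.flatnonzero(xsum)     # columns containing lit pixels
    ys = np.flatnonzero(ysum)     # rows containing lit pixels
    cx = (xs[0] + xs[-1]) / 2.0
    cy = (ys[0] + ys[-1]) / 2.0
    return cx, cy, xsum, ysum

# Toy frame: a lit part body, offset from the frame centre.
frame = np.zeros((48, 64), dtype=np.uint8)
frame[8:24, 20:44] = 255
cx, cy, _, _ = center_from_histograms(frame)
assert (cx, cy) == (31.5, 15.5)
```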

  8. #708
    Join Date
    Sep 2006
    Posts
    70
    I was asked how to check IC rotation with these histograms, so I'm posting two images to clarify it.
    For an IC, take the two high peaks from the X histogram and combine them with the start and end of the Y histogram; that gives you two points, which give you the rotation.
    For resistors, use the centers of the histogram bars and you get the same two points for calculating the rotation.
    The center is always calculated by taking the span of the horizontal/vertical histogram and dividing it by two.
    There are always some sanity checks: for finding the center, only the white bar is used, defined as 20% of the maximum histogram value.
    The same applies to the start/end of the vertical histogram for pin finding, where a fixed threshold is used.

    After getting the relevant info, you could mask the image with a negative mask of the expected IC pads and calculate the maximum value (using cvMinMaxLoc, for example, or cvAvg);
    if you get a result greater than zero, something is wrong, and another algorithm should be used if one is available, or the component goes in the discard bin.
    You can further add the mask database to the good-part shape using a weighted approach, in order to disqualify parts that don't conform to the good-part average match.

    Another type of calculation, using a lookup table, is faster but requires a database of components. If you know the dimensions of the component, it's simple to make a lookup table
    of its width and, as an alternative or in addition, the maximum and averaged histogram values. This compares the width and height of the component, and based on those values it gives out the rotation of the component. The size of such a table is 16 KB per part; for 128 parts that's 2 MB without interpolation. If you use a dual approach, width x height to find the coarse offset and
    xmax/xmin for the fine rotation offset, then the memory and time required for the lookup are minimized.

    Part rotation can be performed in memory as an alternative to doing it in hardware. Under certain conditions it's faster or better, for example when doing shape-based lookup.
    Needless to say, you can switch from one type of part lookup to the next after some samples are acquired, but if low-speed hardware is used, that effort may not be needed because
    the movement time is very long.
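The "two points give you the rotation" step is a single atan2 once the points are extracted; here is a sketch with made-up peak coordinates (the point-finding itself is the histogram thresholding described above, not shown):

```python
import math

def rotation_from_points(p1, p2):
    """Angle (degrees) of the line through two feature points,
    relative to the X axis; 0 means the part is square."""
    (x1, y1), (x2, y2) = p1, p2
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Hypothetical numbers: left pin-row point found at (100, 210),
# right pin-row point at (412, 222) -> the part is slightly tilted.
angle = rotation_from_points((100, 210), (412, 222))
assert 2.1 < angle < 2.3   # about 2.2 degrees off square
```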

  9. #709
    Join Date
    Jan 2007
    Posts
    148
    I'm just starting to look at vision and rotation-to-placement for pick and place using OpenCV. Has anyone any code to share, or ideas, etc.? OpenCV is new to me.
    I'm certainly interested in working with others to make this an open-source project, should others be interested.

  10. #710
    Join Date
    Sep 2006
    Posts
    11
    Have you looked at OpenPnP on Google Groups?

  11. #711
    FYI: Here is a PDF I found with graphics explaining the laser aligners. I would be surprised if we couldn't make this today with some cheap parts: a 1D CCD array, a laser-pointer thingy with a lens to make it a line, and some processor to process it. I've been doing PSoC 5LP, and they are pretty cheap and powerful, and even have programmable analog components. With 3D printers now, a suitable enclosure can be done. I just might have to...

    http://www.bpmmicro.com/wp-content/u...LA_EN_0703.pdf

  12. #712
    Here's the CCD element:
    http://www.eureca.de/datasheets/01.x.../TCD1205DG.pdf

    Here are the line lasers:
    Line Generating Lasers

    Can anyone suggest which of those line lasers would be best? I've played with a few, but I am not an optics pro. These have a fanout between 60 and 90 degrees. The CCD sensor is about 1.125" (28.6mm) @ 2048 pixels. So we want to measure something max w/h of 1.125" square. The CCD sensor is sensitive to infrared light.

    So what color laser? red, green, violet, infrared?

    [edit] Wait a minute....a line laser with a fanout is going to generate a shadow with a fanout. Obviously, at the opposite end of the CCD will have to be some sort of light pipe that absorbs the laser light and diffuses it. The CCD element is then reading this diffused light.
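A quick sanity check on the numbers above (assuming the full 2048 pixels span the quoted 28.6 mm):

```python
# Back-of-envelope resolution for the TCD1205DG-style linear CCD.
sensor_mm = 28.6          # active length quoted above
pixels = 2048
um_per_pixel = sensor_mm / pixels * 1000.0   # about 14 um per pixel
assert 13.9 < um_per_pixel < 14.1

# So an 0402 chip (1.0 x 0.5 mm) would cast a shadow of roughly
# 72 x 36 pixels, plenty for sub-pixel edge finding.
```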

  13. #713
    Join Date
    Sep 2006
    Posts
    70
    The gray-level filter required to keep from saturating the CCD element, a Wratten ND 3.0/4.0, costs several times more than alternative solutions.

  14. #714
    Thanks nisma, but can you explain more about what "Watten ND 3.0/4.0 cost times more then alternative solutions" means? I googled "Watten ND gray level filter" but got nothing (except your post... wow Google, you're fast!)

    With a diffuse light pipe I wouldn't think the CCD would saturate. It isn't getting direct laser light. Perhaps there is a better way to collimate the beam than diffusing it. Again, I'm not an optics expert, so any details/help here is much appreciated.

    C

  15. #715
    Join Date
    Sep 2006
    Posts
    70
    Kodak Wratten filter, sorry; a 3"x3" sheet is $80/€75 + shipping. I have tested the NEC µPD8880CY with LEDs, and I needed the ND filter.
    I asked a photographer to give me medium-format diapositive (slide) film: he made an image of paper using flash and underexposed the whole thing by 2 stops.
    So the cost was a beer.
    Principally these are designed for reading light reflected from paper, so you must account for light reduction if you want to expose them directly to the light.

  17. #717
    Join Date
    Sep 2006
    Posts
    70
    The overall rough sketch.
    View image: ccd

    I have a source for these sensors if you need it. 8 MHz max clock rate.

  18. #718
    Hi nisma, so you are thinking there should be multiple lasers? I've seen the patent on this by CyberOptics. They say something about improved resolution by cycling the lasers and analyzing the shadows. I could only read the abstract, but I assume each laser provides a different silhouette due to the different angles, and these are combined into one profile that eliminates the shadow in software, to get what a single ideal collimated laser would give.

    Why the different colors on the sides though? Wouldn't one color matching the filter be better?

    FYI I have 10 of the TCD1205DG sensors on their way now.

  19. #719
    Join Date
    Sep 2006
    Posts
    70
    Instead of a laser, you could use narrow-angle LEDs. The image sensor shown has 2x R + 2x G + 2x B sensors; that's why I used RGB LEDs.
    Further, there are two red LEDs so that I can measure a wider body.
    The key here is not to detect the edge directly; that is wrong even in image processing. If you have an image, apply a (Gaussian) blur first and then do the threshold/Canny/etc.,
    otherwise you get too many false positives. The same applies here, except the blur is caused by optical diffraction at the edge of the component.
    That said, you have 25/22 mm (I have not fully read the datasheet) of image sensor. Using one red LED you could sense a maximum of 7 mm. These are fantasy values, just to show the principle.
    If you have a 7x7 mm component, then the obstacle for the LED/laser line is 8 mm, because you must rotate the part in order to know what angle you are detecting.
    Having multiple lasers, or detecting the component at different angles, amounts to the same thing. This can be extended to 9 mm for the 7x7 mm component including pins.
    If you want to detect bigger parts, then instead of one laser line or narrow LED you could illuminate the component from the side, so that the shadow falls near the center with some gap between
    the two. This is important if using a B/W sensor; if you can detect different colors it is not critical. For LEDs, having two red LEDs instead of one, you can measure 10 mm.
    Instead of detecting three colors, you could switch the lasers on and off from the microcontroller. Using side illumination, you could then detect parts up to 30 mm. If you have side
    illumination on only one side and not two, you could rotate the part to compensate for the missing detection. Rotate only in one direction, and account for this before the pickup; otherwise you have backlash
    that you cannot estimate/compensate, and then you need a closed-loop servo, which drives the cost up. I hope this explains something; otherwise, ask about the things you don't understand.

    If you choose to detect parts up to 7 mm and center bigger parts using the up-looking camera, you don't need the side illumination.
    I used and illustrated it because this sensor can detect the colors at the same time. With a B/W sensor, you can probably forget the discussion about side illumination,
    because bigger parts are less frequent and can be centered with the up-looking camera.

    Another thing to consider with lasers is that the laser line is not uniform: http://img43.imageshack.us/img43/6474/img0409n.jpg
    You may have to rotate the diode to find a sufficiently uniform line. Project the line onto a wall with paper attached and check uniformity.

  20. #720
    Join Date
    Sep 2006
    Posts
    70
    Quote Originally Posted by cncbasher View Post
    i'm just starting with looking at vision and rotation to placement using opencv of pick and place , has anyone any code to share or ideas etc , as opencv is new to me .
    i'm certianly interested in working with others to make this an open source project , should others be interested
    This is a part of the code, and different people code differently.
    I ended up with something like this. I code in C for speed reasons; I'm sure 99% would want to code it in C++ or C# or Java, or maybe Python with NumPy and SURF/SIFT/RANSAC/etc.

    if(i=param(0,"close pixel value")) cvDilate(tmp1,tmp2,0,i), cvErode(tmp2,tmp2,0,i), Show(close,tmp2); else CV_SWAP(tmp2,tmp1,swap);
    if(i=param(0,"open pixel value ")) cvErode(tmp2,tmp2,0,i), cvDilate(tmp2,tmp2,0,i), Show(open,tmp2);
    if(i=param(0,"close pixel value")) cvErode(tmp2,tmp2,0,i), cvDilate(tmp2,tmp2,0,i), Show(open,tmp2);
    if(i=param(0,"normalize ?" )) cvEqualizeHist(tmp2,tmp2), Show(equalize,tmp2);
    if(i=param(0,"blur kernel size ")) cvSmooth(tmp2,tmp2,CV_GAUSSIAN,i,i), Show(Smooth,tmp2);

    param is an integer array of parameters; param(0) automatically advances to the next index, and the text is just a hint for stepping through the code in the debug output.
    Likewise, Show displays the resulting image; the first parameter of Show is expanded to a string, and it is only executed when stepping through the code.
    There are many parameters, some just for stepping through the process in order to display the results better.

    Others prefer to have Python code and execute the P-code every time. What is your preference, and what type of help with the code do you need?
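For reference, the open/close steps in that snippet don't need OpenCV at all; here is a minimal NumPy stand-in for cvErode/cvDilate with a 3x3 kernel and one iteration (toy image; helper names are my own):

```python
import numpy as np

def dilate(b):
    """3x3 binary dilation: OR together the 8-neighbour shifted copies."""
    h, w = b.shape
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(b):
    return 1 - dilate(1 - b)     # erosion is dilation of the complement

def open_(b):
    return dilate(erode(b))      # 'open pixel value' step: removes specks

def close_(b):
    return erode(dilate(b))      # 'close pixel value' step: fills pinholes

img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 1                # the part body
img[0, 0] = 1                    # a one-pixel noise speck
cleaned = open_(img)
assert cleaned[0, 0] == 0        # speck removed
assert cleaned[3:6, 3:6].all()   # body survives
```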


