This post is just to give an outline of how I plan to create the XY mechanism of this LEGO Photocopier device. The mechanism I plan to use is known as the Delta Robot. It consists of three servos that will be attached to the Z mechanism by three pivoting arms. Do not fret, a plethora of illustrations will be provided to offset this lackluster description. I’ll break the process down into three production phases: research, prototyping, and integration.
The research phase will consist mostly of figuring out the mathematics required to control the Delta Robot. Basically, I must figure out how to turn a desired (x,y,z) position for the end effector, which in this case will be the Z mechanism, into three angles (one for each servo). And now for the aforementioned illustration, see Figure 1.
Each servo will be set to a certain position, thus achieving the desired angle, and this will, in theory, move the end effector to a certain coordinate.
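The position-to-angles problem is the standard inverse kinematics of a rotational delta robot, which has a well-known closed-form solution. Here is a rough Python sketch of it; the dimensions f (base triangle side), e (effector triangle side), rf (upper arm length), and re (forearm length) are placeholder parameters, since I have not measured the actual build yet.

```python
import math

def angle_yz(x0, y0, z0, f, e, rf, re):
    """Solve for one servo angle (in degrees) in that arm's own YZ plane.
    Returns None if the point is unreachable."""
    y1 = -0.5 * math.tan(math.radians(30)) * f   # shoulder joint offset from center
    y0 -= 0.5 * math.tan(math.radians(30)) * e   # shift target to the effector joint
    # The elbow lies on the line z = a + b*y, intersected with the
    # circle of radius rf around the shoulder joint.
    a = (x0 * x0 + y0 * y0 + z0 * z0 + rf * rf - re * re - y1 * y1) / (2 * z0)
    b = (y1 - y0) / z0
    d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)   # discriminant
    if d < 0:
        return None   # target outside the arm's reach
    yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1)
    zj = a + b * yj
    theta = math.degrees(math.atan(-zj / (y1 - yj)))
    return theta + (180.0 if yj > y1 else 0.0)

def delta_ik(x, y, z, f, e, rf, re):
    """Angles for all three servos: rotate the target into each arm's frame,
    since the arms are spaced 120 degrees apart."""
    cos120, sin120 = -0.5, math.sqrt(3) / 2
    return (angle_yz(x, y, z, f, e, rf, re),
            angle_yz(x * cos120 + y * sin120, y * cos120 - x * sin120, z, f, e, rf, re),
            angle_yz(x * cos120 - y * sin120, y * cos120 + x * sin120, z, f, e, rf, re))
```

A quick sanity check: with the end effector at x = y = 0 the geometry is symmetric, so all three servo angles should come out identical.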
The prototype phase will be mostly mechanical, with a little bit of programming involved after the mechanism is built, to make sure that it has the range of motion required for the task of moving around LEGO elements. Servos will be used to move the arms, and the arms themselves will be made of either LEGO MINDSTORMS or TETRIX elements. The end effector will have to be modified depending on how the Z mechanism needs to attach to it. A labeled prototype drawing can be seen in Figure 2.
A program that accepts an (x,y,z) coordinate as input and outputs a position for each of the servos will have to be created, in order to verify that the Delta Robot has a full range of motion and can move LEGO elements around with ease.
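The last step of that program will be converting each angle into whatever position value the servo controller expects. A minimal sketch of that calibration layer, assuming a controller that takes positions 0 through 255; the center position and counts-per-degree here are made-up numbers until I can calibrate the real servos:

```python
def angle_to_servo(angle_deg, center=128, counts_per_degree=255 / 180.0):
    """Map a servo angle in degrees to an integer controller position (0-255).
    center and counts_per_degree are placeholders to be calibrated."""
    position = round(center + angle_deg * counts_per_degree)
    if not 0 <= position <= 255:
        raise ValueError(f"angle {angle_deg} deg is outside the servo's travel")
    return position
```

Raising an error on out-of-range angles (rather than silently clamping) should make it obvious during range-of-motion testing when a requested coordinate is not actually reachable.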
The integration phase will involve combining the Delta Robot and its program with the rest of the project, which includes vision processing as well as the Z mechanism. The Z mechanism will have to be attached to the end effector of the Delta Robot, and its controls will have to be folded into the program so that it can pick up and place LEGO elements appropriately. Previously, an (x,y,z) coordinate had to be manually input into the program; with vision processing enabled, it will be supplied automatically based on the image that must be replicated. This means the Delta Robot program will have to be integrated into the vision processing program, so changes to the latter will be necessary. Hopefully this phase is clear enough not to require an illustration, as my patience reserved for drawing has all but dried up at this point.
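In lieu of a drawing, here is a rough sketch of how one pick-and-place cycle might hang together. Every name in it is hypothetical: move_to stands in for the Delta Robot program, gripper for the Z mechanism controls, and the coordinates would ultimately come from the vision processing.

```python
def place_element(source, dest, travel_z, pick_z, move_to, gripper):
    """Move one LEGO element from source=(x, y) to dest=(x, y).
    travel_z is a safe cruising height; pick_z is the grabbing height.
    move_to and gripper are hypothetical callbacks into the Delta Robot
    program and the Z mechanism, respectively."""
    sx, sy = source
    dx, dy = dest
    move_to(sx, sy, travel_z)   # hover above the source element
    move_to(sx, sy, pick_z)     # descend onto it
    gripper("grab")
    move_to(sx, sy, travel_z)   # lift clear
    move_to(dx, dy, travel_z)   # travel to the destination
    move_to(dx, dy, pick_z)     # descend
    gripper("release")
    move_to(dx, dy, travel_z)   # lift clear again
```

Keeping the sequence in one function like this should make it easy to swap the stub callbacks for the real servo and Z mechanism code later.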
So that is the magnificent plan; it will be interesting to see how much it changes between now and the end of the project.