EGB339 Assessment 2 requires implementing a computer vision-based solution for a pick-and-place task using a physical robot arm. This report outlines the solution methodology, the algorithms used, and the implementation details of the project.
Problem Statement
The task is to design and implement a system where a robot arm can detect multiple objects in its workspace, determine their locations and shapes, and then pick them up to place them at specified target locations. The project involves several key components including object detection, image processing, coordinate transformation, inverse kinematics calculations, and robot control.
Solution Overview
The solution involves several steps:
Calibration and Image Preprocessing:
The system starts with a calibration step: an image containing distinct markers is used to compute a homography between the image plane and the robot's XY plane.
The calibration markers are detected using computer vision techniques, and their positions are sorted in clockwise order.
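The two calibration sub-steps above can be sketched as follows. The clockwise sort orders markers by angle around their centroid (in image coordinates, where y grows downward), and the homography is estimated with the direct linear transform (DLT); these are illustrative stand-ins, not the project's exact routines:

```python
import numpy as np

def sort_clockwise(points):
    """Sort 2D image points clockwise around their centroid.

    In image coordinates (y axis pointing down), increasing atan2
    angle corresponds to a clockwise sweep.
    """
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centre[1], pts[:, 0] - centre[0])
    return pts[np.argsort(angles)]

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    Each point pair contributes two rows to the linear system A h = 0;
    the solution is the right singular vector of A with the smallest
    singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1
```

Four non-collinear marker correspondences are enough to determine the homography exactly; more markers would give a least-squares fit.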
Object Detection and Shape Classification:
The robot's workspace image is analyzed to detect objects of interest based on their color (red, green, blue).
Each detected object is classified as a circle or a square using simple shape descriptors.
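One simple way to do both steps, shown here as an illustrative sketch rather than the project's exact classifiers: pick the colour by the dominant RGB channel, and distinguish circle from square by extent (object area divided by bounding-box area), which is roughly pi/4 ≈ 0.785 for a circle and 1.0 for a square. The 0.9 threshold is an assumed value:

```python
import numpy as np

def classify_colour(rgb_pixels):
    """Label an object red, green, or blue by its dominant mean channel."""
    means = np.asarray(rgb_pixels, dtype=float).reshape(-1, 3).mean(axis=0)
    return ["red", "green", "blue"][int(np.argmax(means))]

def classify_shape(mask):
    """Classify a single-object binary mask as 'circle' or 'square'.

    Extent = area / bounding-box area: ~pi/4 for a circle filling its
    box, ~1.0 for an axis-aligned square. Threshold of 0.9 is assumed.
    """
    ys, xs = np.nonzero(mask)
    area = xs.size
    bbox_area = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    return "circle" if area / bbox_area < 0.9 else "square"
```

An extent test is rotation-sensitive for squares; a production classifier would also consider circularity (4*pi*area/perimeter^2) or corner counts.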
Coordinate Transformation:
The detected object positions in the image are transformed to the robot's XY plane coordinates using the previously calculated homography matrix.
This transformation allows mapping the image coordinates to real-world robot coordinates, facilitating the pick-and-place task.
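Applying the homography to a detected pixel position is a single matrix-vector multiply followed by a perspective divide; a minimal sketch (function name is illustrative):

```python
import numpy as np

def image_to_robot(H, uv):
    """Map an image pixel (u, v) to robot XY-plane coordinates.

    H is the 3x3 image-to-robot homography from calibration; the result
    is recovered by dividing out the homogeneous scale factor.
    """
    u, v = uv
    p = H @ np.array([u, v, 1.0])
    return (p[0] / p[2], p[1] / p[2])
```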
Pick-and-Place Procedure:
For each object type and its corresponding target position specified in the input dictionary, the system executes the pick-and-place procedure.
The robot arm moves to a position above the object, lowers the suction gripper to pick it up, lifts clear of the workspace, moves above the target position, lowers, and releases the object.
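The motion sequence above can be sketched as a short routine. The robot interface here (`move_to(x, y, z)` and `set_suction(on)`) and the hover/grip heights are hypothetical names standing in for the actual CoppeliaRobot API, which is not reproduced in this report:

```python
def pick_and_place(robot, pick_xy, place_xy, hover_z=50.0, grip_z=5.0):
    """Pick an object at pick_xy and release it at place_xy (robot XY, mm).

    Assumes a hypothetical robot interface with move_to(x, y, z) and
    set_suction(on). All approaches are made from above at hover_z to
    avoid dragging the gripper through the workspace.
    """
    px, py = pick_xy
    tx, ty = place_xy
    robot.move_to(px, py, hover_z)   # approach the object from above
    robot.move_to(px, py, grip_z)    # lower onto the object
    robot.set_suction(True)          # grip with suction
    robot.move_to(px, py, hover_z)   # lift clear of the workspace
    robot.move_to(tx, ty, hover_z)   # traverse at a safe height
    robot.move_to(tx, ty, grip_z)    # lower to the target position
    robot.set_suction(False)         # release the object
    robot.move_to(tx, ty, hover_z)   # retreat upward
```

Keeping every lateral move at `hover_z` is what enforces the collision-avoidance behaviour described under safety below.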
Safety and Error Handling:
Safety measures and error handling are implemented to prevent collisions, ensure proper lifting of the gripper, and handle abnormal conditions.
Error margins are defined to ensure accurate placement of objects within a tolerance of 20 mm.
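The 20 mm placement tolerance amounts to a Euclidean-distance check; a one-line sketch (helper name is illustrative):

```python
import math

def within_tolerance(placed_xy, target_xy, tol=20.0):
    """Return True if the placed position is within tol mm of the target."""
    dx = placed_xy[0] - target_xy[0]
    dy = placed_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= tol
```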
Video Example
Implementation Details
The solution is implemented in Python 3.9 using OpenCV, machinevisiontoolbox, NumPy, and the CoppeliaRobot API.
The `PickAndPlaceRobot` function orchestrates the overall pick-and-place process, coordinating object detection, shape classification, coordinate transformations, and robot control.
Image preprocessing techniques such as median filtering and dilation are applied to enhance object detection accuracy.
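The project uses library implementations of these filters; the NumPy-only equivalents below just show the idea. The median filter suppresses salt-and-pepper noise before thresholding, and the dilation fills small holes in the binary object mask:

```python
import numpy as np

def median3(img):
    """3x3 median filter; border pixels are left unchanged."""
    h, w = img.shape
    # Stack the nine shifted views so axis -1 holds each pixel's window.
    patches = np.stack([img[dy:h - 2 + dy, dx:w - 2 + dx]
                        for dy in range(3) for dx in range(3)], axis=-1)
    out = img.copy()
    out[1:-1, 1:-1] = np.median(patches, axis=-1)
    return out

def dilate3(mask):
    """3x3 binary dilation: OR each pixel with its 8 neighbours."""
    h, w = mask.shape
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```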
Inverse kinematics calculations are performed to determine the joint angles required to move the robot arm to desired positions.
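The report does not reproduce the arm's kinematic model, so as an illustration here is the standard closed-form inverse kinematics for a planar two-link arm, which captures the kind of calculation involved (the real arm also has a base rotation and gripper offset):

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form IK for a planar 2-link arm (elbow-down solution).

    Returns joint angles (q1, q2) in radians so that the end effector
    reaches (x, y), given link lengths l1 and l2. Illustrative only:
    the project's actual arm has more degrees of freedom.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```

Choosing `-math.acos(c2)` instead would give the elbow-up solution; a real controller picks whichever keeps the arm clear of obstacles and joint limits.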
Safety guidelines are followed to ensure proper handling of objects and avoid collisions during the operation.
Conclusion
The proposed solution successfully addresses the requirements of the EGB339 Assessment 2, demonstrating an effective computer vision-based approach for a pick-and-place task using a physical robot arm. By integrating image processing, coordinate transformations, and robot control techniques, the system achieves accurate object manipulation within the specified workspace.
Because this is a university project, the code must remain in a private repository, but I can share a link upon request.