1 Dobot

The Dobot Magician is a multi-purpose robot arm with a black-and-white unibody design; it is
built for the desktop, safe, easily lifted, and ready to use straight out of the box. It ships with a
vacuum pump kit, a gripper, a writing and drawing kit, a 3D printing kit, and other accessories,
and supports tasks such as writing, drawing, 3D printing, and even playing chess against a
human. It is marketed as an all-in-one robot arm for education. The arm can be controlled from
a PC or phone, or by gesture or voice, since it communicates over USB, Wi-Fi, and Bluetooth.
The Dobot Magician has 4 axes, a maximum reach of 320 mm, and a position repeatability
(control) of 0.2 mm. It provides 13 extension ports, 1 programmable key, and 2 MB of offline
command storage. One of the Dobot's main characteristics is its range of end effectors, such as
the 3D printer kit, pen holder, vacuum suction cup, gripper, and laser. The software for the
Dobot Magician comprises DobotStudio (for Windows XP, Windows 7 SP1, Windows 8/10, and
macOS 10.10-10.12), Repetier-Host, GrblController 3.6, and DobotBlockly (a visual programming
editor), while the software development kit consists of the Communication Protocol and the
Dobot Program Library. In ROS, the Dobot does not have much functionality beyond controlling
the arm from the keyboard via the available package.
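As an illustration of PC control over USB, a minimal sketch using the third-party pydobot
Python library is shown below; the library and the serial-port discovery are assumptions here,
and the official SDK route is the Communication Protocol / Dobot Program Library mentioned
above.

# Minimal sketch, assuming the third-party pydobot library (pip install pydobot)
# and that the arm is the first serial device found; adjust the port as needed.
from serial.tools import list_ports
import pydobot

port = list_ports.comports()[0].device        # pick the first serial port (assumption)
device = pydobot.Dobot(port=port)

(x, y, z, r, j1, j2, j3, j4) = device.pose()  # current Cartesian pose and joint angles
device.move_to(x + 20, y, z, r, wait=True)    # small blocking Cartesian move
device.suck(True)                             # engage the vacuum suction cup
device.move_to(x, y, z, r, wait=True)         # move back with the object
device.suck(False)                            # release the object
device.close()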

1. Motion Planning of Humanoid Robot Arm for grasping task [3]


This paper demonstrates a new method for computing a numerical solution to the motion
planning of a humanoid robot arm. The method is based on a combination of two nonlinear
programming techniques: a forward recursion formula and the FBS method.
2. Autonomous vision-guided bi-manual grasping and manipulation [4]
This paper describes the implementation, demonstration, and evaluation of a variety of
autonomous, vision-guided manipulation capabilities, using a dual-arm Baxter robot. It starts
from the case in which a human operator moves the master arm and the slave arm follows it,
and then combines an image-based visual servoing scheme with this teleoperation mode to
perform dual-arm manipulation without human intervention.
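For context, the core of an image-based visual servoing scheme is the control law
v = -lambda * pinv(L) * (s - s*), which maps the image-feature error to a camera velocity
command. A minimal numpy sketch of one control step follows; the feature values and the
interaction-matrix entries are illustrative assumptions, not values from the paper.

# One image-based visual servoing (IBVS) step: v = -lambda * pinv(L) * (s - s_star).
import numpy as np

lam = 0.5                                        # control gain
s = np.array([120.0, 80.0, 300.0, 85.0])         # current image features (assumed, pixels)
s_star = np.array([160.0, 120.0, 260.0, 120.0])  # desired image features

# Interaction (image Jacobian) matrix relating feature velocities to the 6-DOF
# camera twist; the entries here are placeholders for illustration only.
L = np.random.default_rng(0).normal(size=(4, 6))

error = s - s_star
v = -lam * np.linalg.pinv(L) @ error             # commanded camera twist (vx..wz)
print('camera velocity command:', np.round(v, 3))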
3. Arm grasping for mobile robot transportation using Kinect sensor and kinematic analysis [1]
In this paper, the authors describe how the grasping and placing strategies for the 6-DOF arms
of an H20 mobile robot can be supported to achieve high-precision performance for safe
transportation. An accurate kinematic model is used to find a safe path from one pose to
another precisely.
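An accurate kinematic model of this kind is commonly built from Denavit-Hartenberg (DH)
parameters. The following generic numpy sketch of forward kinematics uses a made-up
three-joint DH table, not the H20's actual parameters.

# Generic forward kinematics from Denavit-Hartenberg parameters.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one DH link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the link transforms; returns the end-effector pose as a 4x4 matrix."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

dh_table = [(0.10, 0.00, np.pi / 2),   # (d, a, alpha) per link -- placeholder values
            (0.00, 0.25, 0.0),
            (0.00, 0.20, 0.0)]
q = np.deg2rad([30.0, 45.0, -20.0])
print('end-effector position:', np.round(forward_kinematics(q, dh_table)[:3, 3], 3))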
4. A new method for mobile robot arm blind grasping using ultrasonic sensors and Artificial Neural
Networks [2]
The paper presents a new method to realize mobile robot arm grasping in indoor laboratory
environments. This method adopts a blind strategy, which requires no sensors to be mounted
on the robot arms and avoids calculating the arms' complex kinematic equations.

The method includes:


(a) two on-board ultrasonic sensors in the robot base are used to measure the distances
between the robot base and the grasping tables in front of the arm;
(b) an Artificial Neural Network (ANN) is proposed to learn the nonlinear relationship
between the ultrasonic distances and the joint control values; after a training step on
sampled data, the ANN can generate the next-step joint control values quickly and
accurately from a new pair of real-time measured ultrasonic distances (a minimal
sketch of this step follows the list);
(c) to match the blind strategy with the transportation process, an arm control
component with user interfaces is developed;
(d) a method named "training arm" is adopted to prepare the training data for the ANN
model's training procedure.
Finally, an experiment shows that the proposed strategy performs well in both accuracy and
real-time computation and can be applied to real-time arm operations for mobile robot
transportation in laboratory automation.
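As a sketch of step (b), a small multilayer perceptron can regress joint control values from
the two ultrasonic distances. The training data below is synthetic, standing in for samples
that the paper's "training arm" procedure would collect.

# Illustrative sketch of step (b): an MLP mapping two ultrasonic distances to
# joint control values. The training data is synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
distances = rng.uniform(0.2, 0.8, size=(200, 2))   # (d_left, d_right) in metres
joints = np.column_stack([                         # fake 3-joint targets
    0.5 * distances[:, 0] + 0.1,
    -0.8 * distances[:, 1] + 0.4,
    0.3 * (distances[:, 0] - distances[:, 1]),
]) + rng.normal(scale=0.01, size=(200, 3))

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(distances, joints)

new_measurement = np.array([[0.35, 0.55]])         # fresh ultrasonic reading
print('predicted joint values:', np.round(ann.predict(new_measurement)[0], 3))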
5. Find-object
Find-Object is a simple Qt interface for trying OpenCV implementations of SIFT, SURF, FAST,
BRIEF, and other feature detectors and descriptors.
It has many features:
(a) any parameter can be changed at runtime, making it easier to test feature detectors and
descriptors without recompiling;
(b) detectors/descriptors supported (from OpenCV): BRIEF, Dense, FAST, GoodFeaturesToTrack,
MSER, ORB, SIFT, STAR, SURF, FREAK and BRISK;
(c) sample code with the OpenCV C++ interface is provided (a rough Python equivalent is
sketched after this list);
(d) for an example of an application using SURF descriptors, see the author's RTAB-Map project
(an appearance-based loop closure detector for SLAM).
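The C++ sample referenced in (c) is not reproduced in this document; as a rough equivalent,
ORB feature detection and brute-force matching through OpenCV's Python bindings might look
like the following (image file names are placeholders).

# Detect ORB features in two images and match them; file names are placeholders.
import cv2

img1 = cv2.imread('object.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)            # ORB: a fast binary alternative to SIFT/SURF
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-check filters weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f'{len(matches)} matches; best distance = {matches[0].distance:.0f}')
out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None)
cv2.imwrite('matches.png', out)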
2.2. Yi Yang
1. Dynamic Sensor-Based Control of Robots with Visual Feedback
This paper describes the
formulation of sensory feedback models for systems which incorporate complex mappings
between robot, sensor, and world coordinate frames. These models explicitly address the use of
sensory features to define hierarchical control structures, and the definition of control strategies
which achieve consistent dynamic performance. Specific simulation studies examine how
adaptive control may be used to control a robot based on image feature reference and feedback
signals.
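As a toy illustration of the adaptive-control idea (on a scalar first-order system rather
than the paper's robot/image formulation), the classic MIT-rule gain adaptation can be
simulated in a few lines; all plant and tuning values are assumptions.

# MIT-rule adaptation of a feedforward gain for a scalar first-order plant.
import numpy as np

dt, gamma = 0.01, 0.5
a, kp = 1.0, 2.0            # plant: y' = -a*y + kp*u, with kp unknown to the controller
km = 1.0                    # reference model: ym' = -a*ym + km*r
y = ym = 0.0
theta = 0.0                 # adaptive feedforward gain

for step in range(20000):
    r = np.sign(np.sin(2 * np.pi * 0.05 * step * dt))  # square-wave reference
    u = theta * r
    y += dt * (-a * y + kp * u)
    ym += dt * (-a * ym + km * r)
    e = y - ym
    theta += dt * (-gamma * e * ym)   # MIT rule: adapt gain to shrink the model error

print(f'adapted gain theta = {theta:.3f} (ideal km/kp = {km / kp:.3f})')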
2. Vision-based Motion Planning For A Robot Arm Using Topology Representing Networks
This paper describes the concepts of the Perceptual Control Manifold (PCM) and the Topology
Representing Network (TRN). By exploiting the topology-preserving features of the neural
network, path-planning strategies defined on the TRN lead to flexible obstacle avoidance. The
practical
feasibility of the approach is demonstrated by the results of simulation with a PUMA robot and
experiments with a Mitsubishi Robot.
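The core of a Topology Representing Network combines neural-gas weight adaptation with
competitive Hebbian edge creation between the two best-matching units. A compact sketch of
the update loop follows; the 2-D input distribution and all hyperparameters are illustrative,
and edge aging is omitted for brevity.

# Topology Representing Network (TRN) core loop: neural-gas adaptation plus
# competitive Hebbian edges. Inputs and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 30, 3000
eps, lam = 0.05, 2.0                            # learning rate and neighbourhood range
W = rng.uniform(0.0, 1.0, size=(n_units, 2))    # unit weights in input space
edges = set()                                   # learned topology (unit index pairs)

for _ in range(n_steps):
    x = rng.uniform(0.0, 1.0, size=2)           # sample from the (assumed) input manifold
    order = np.argsort(np.linalg.norm(W - x, axis=1))   # rank units by distance to x
    ranks = np.empty(n_units)
    ranks[order] = np.arange(n_units)
    # Neural-gas update: closer-ranked units move more strongly towards x.
    W += eps * np.exp(-ranks / lam)[:, None] * (x - W)
    # Competitive Hebbian learning: connect the two best-matching units.
    edges.add(tuple(sorted((order[0], order[1]))))

print(f'{len(edges)} edges learned between {n_units} units')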
3. A Framework for Robot Motion Planning with Sensor Constraints
This paper proposes a motion-planning framework that incorporates sensor constraints with the
help of a space called the perceptual control manifold (PCM), defined on the product of the
robot configuration space and an image-based feature space. The authors show how the task of
intercepting a moving target can be mapped to the PCM, using image feature trajectories of the
robot end-effector and the moving target. This leads to the generation of motion plans that
satisfy various constraints and optimality criteria derived from the robot kinematics, the control
system, and the sensing mechanism; specific interception tasks are analyzed to illustrate this
vision-based planning technique.
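One way to picture the PCM is as recorded (configuration, image-feature) pairs on the product
space, from which a configuration realising a desired feature can be recovered by
nearest-neighbour lookup. The sketch below uses an entirely synthetic "camera" and is only
meant to convey the construction, not the paper's actual procedure.

# PCM sketch: sample (configuration, feature) pairs, then look up a configuration
# whose observed features are closest to those of the target. All data synthetic.
import numpy as np

rng = np.random.default_rng(7)
q_samples = rng.uniform(-np.pi, np.pi, size=(500, 3))   # joint configurations

def observe(q):
    """Fake camera: image features produced by configuration q (illustrative)."""
    return np.array([np.cos(q[0]) + q[1], np.sin(q[2]) - q[1]])

s_samples = np.array([observe(q) for q in q_samples])
pcm = np.hstack([q_samples, s_samples])                 # points on the product space

s_target = np.array([0.8, -0.3])                        # features of the moving target
i = np.argmin(np.linalg.norm(s_samples - s_target, axis=1))
print('configuration best realising the target features:', np.round(pcm[i, :3], 3))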
4. Motion Planning of a Pneumatic Robot Using a Neural Network
This paper presents a framework for sensor-based robot motion planning that uses learning to
handle arbitrarily configured sensors and robots. Autonomous robotics requires the generation
of motion plans for achieving goals while satisfying environmental constraints. Classical motion
planning is defined on a configuration space which is generally assumed to be known, implying
the complete knowledge of both the robot kinematics as well as knowledge of the obstacles in
the configuration space. Uncertainty, however, is prevalent, which makes such motion planning
techniques inadequate for practical purposes. Sensors such as cameras can help in overcoming
uncertainties but require proper utilization of sensor feedback for this purpose. A robot motion
plan should incorporate constraints from the sensor system as well as criteria for optimizing the
sensor feedback. However, in most motion planning approaches, sensing is decoupled from
planning. A framework for motion planning was proposed that considers sensors as an integral
part of the definition of the motion goal. The approach is based on the concept of a Perceptual
Control Manifold (PCM), defined on the product of the robot configuration space and sensor
space. The PCM provides a flexible way of developing motion plans that exploit sensors
effectively. However, there are robotic systems, where the PCM cannot be derived analytically,
since the exact mathematical relationship between configuration space, sensor space, and
control signals is not known. Even if the PCM is known analytically, motion planning may require
the tedious and error-prone process of calibration of both the kinematic and imaging
parameters of the system. Instead of using the analytical expressions for deriving the PCM, they
therefore propose the use of a self-organizing neural network to learn the topology of this
manifold. They first develop the general PCM concept, then describe the Topology Representing
Network (TRN) algorithm they use to approximate the PCM and a diffusion-based path planning
strategy which can be employed in conjunction with the TRN. Path control and flexible obstacle
avoidance demonstrate the feasibility of this approach for motion planning in a realistic
environment and illustrate the potential for further robotic applications.
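The diffusion-based planning step can be pictured as spreading a potential from the goal node
over the learned graph and then descending it from the start. A minimal sketch over an
abstract node graph follows; the edge list stands in for a learned TRN topology.

# Diffusion-based path planning over a learned graph: breadth-first "wavefront"
# distances from the goal, then greedy descent from the start node.
from collections import deque

edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (5, 3)]   # assumed TRN topology
n_nodes, goal, start = 6, 3, 0

adj = {i: [] for i in range(n_nodes)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

# Wavefront diffusion: hop distance of every node from the goal.
dist = {goal: 0}
queue = deque([goal])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in dist:
            dist[v] = dist[u] + 1
            queue.append(v)

# Greedy descent: from the start, always step to a neighbour closer to the goal.
path = [start]
while path[-1] != goal:
    path.append(min(adj[path[-1]], key=lambda v: dist.get(v, float('inf'))))
print('planned node sequence:', path)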
