What defines a robot?
Just kidding, I won't get into that! However, I wanted to share some of what I learned from a lovely "introductory" robotics course I took at Columbia University, taught by Dr. Matei Ciocarlie. The course was full of twists and turns for someone without any robot navigation experience, but it was a magical way to describe movement using math and REFERENCE FRAMES, explored here using ROS in Python.
If you are taking this class now, DO NOT KEEP READING. The code used here is likely not what your assignments will use and it will just confuse you. Please ask your friendly neighborhood TAs.
Part 0: Setting Up ROS
This process differs for everyone, and unfortunately can be so difficult that it is the reason many people drop the class. Below is a description of how to install the ROS environment in Linux, specifically Ubuntu.
- Install Operating System, Ubuntu 16.04 64-bit (I used VirtualBox with 8GB RAM and a 15GB disk, but I recommend NOT doing this and instead installing the OS on a dedicated machine, as later projects are RAM-intensive and will not "pass" grading on a virtual machine)
- Confirm installation of Python 2.7:
python -V
- Install Other ROS Dependencies
sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential
sudo apt install catkin
- Install ROS Kinetic Kame (following the linked instructions can be helpful)
- Setup to accept software from packages.ros.org:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
- Key setup:
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
sudo rosdep init
rosdep update
- ROS Environment Setup
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
source /opt/ros/kinetic/setup.bash
or
source devel/setup.bash
- Make a new catkin workspace directory and build it:
mkdir ~/ros_wkspace_asgn0
mkdir ~/ros_wkspace_asgn0/src
cd ~/ros_wkspace_asgn0/
catkin_make
- Run Environment
- Note: Each part of the assignment (roscore, Python scripts, and starting RViz) must run in its own Terminal window, and each window must first be set up by sourcing setup.bash in the assignment directory where catkin_make was used.
- Navigate to home directory
cd ~/ros_wkspace_asgn0/
- Source
source devel/setup.bash
- Start ROS Core in main window
roscore
- Repeat the navigation and sourcing in additional windows for other ROS commands, such as:
rostopic
rosrun
roslaunch
- TIP: Occasionally nothing works, which is very frustrating, and sometimes editing bashrc is necessary to fix things.
sudo gedit ~/.bashrc
- Add to bottom of file:
source /opt/ros/kinetic/setup.bash
- TIP: When looking for the autograder file, it's usually inside the src/assignmentX/assignmentX folder.
- TIP: In each assignment, it is important to understand the structure of what you are transmitting. These definitions live inside the project folder as .msg files.
Part 1: Using transformations to move between reference frames
A robust and easily manipulated way of representing orientation is the transformation matrix. These matrices are fabulous because they represent both translation and rotation, but the two must be applied in the correct order to accurately represent a transformation. Usually, translation is applied in the original coordinate frame before rotating into the new frame. These transformations are broadcast dynamically and are used to represent changes in position and orientation during navigation. In ROS, there is a global frame that may or may not be shown in RViz; from there, every object is generally described in its own frame relative to the global frame.
Around the x-axis:

$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}$$

Around the y-axis:

$$R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$$

Around the z-axis:

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Translation, as a vector $t = (t_x, t_y, t_z)$ or as a homogeneous matrix:

$$T_{trans}(t) = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Transformation Matrix Setup
All operations here are dot products, where order does matter. Below, the transformation matrix T’ is decomposed into translation and rotation matrices.
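In homogeneous coordinates (using the translation-then-rotation convention from the code below), that decomposition looks like:

$$T' = T_{trans}\,T_{rot} = \begin{bmatrix} I_3 & t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$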
Some helpful functions when creating and working with reference frames are tf.transformations.quaternion_from_euler() (where the euler format is roll, pitch, yaw), tf.transformations.quaternion_matrix(), tf.transformations.quaternion_about_axis(), tf.transformations.rotation_matrix(), tf.transformations.rotation_from_matrix(), and numpy.dot(). Don't forget to add any relevant imports to the beginning of your scripts: import tf, import tf2_ros, import geometry_msgs.msg.
Example 1: Broadcast a transformation from a base frame to an object frame where the transform rotation is (roll, pitch, yaw): (0.5, 0.5, 0) and translation is 1 m on the new x-axis and 2 m on the new y-axis.
First, get started by initializing your geometry message object. You must import geometry_msgs.msg first, usually at the top of your script. This message type already has attributes assigned by the format described in its .msg file, which is worth inspecting; the structure of each layer is described in the documentation online. In the code below, a TransformStamped object is created, which has a transform attribute of type Transform, which in turn has rotation and translation attributes of types Quaternion and Vector3 respectively. Getting the structure of this object and each of its attributes right is essential for the broadcast to work.
bTo = geometry_msgs.msg.TransformStamped()
bTo.header.stamp = rospy.Time.now()
bTo.header.frame_id = "base_frame"
bTo.child_frame_id = "object_frame"
Then, create transformation matrices which will represent each attribute of the object.
# rotation: quaternion from (roll, pitch, yaw)
bToq1 = tf.transformations.quaternion_from_euler(0.5, 0.5, 0.0, axes='sxyz')
bTo.transform.rotation.x = bToq1[0]
bTo.transform.rotation.y = bToq1[1]
bTo.transform.rotation.z = bToq1[2]
bTo.transform.rotation.w = bToq1[3]
# translation: the (1, 2, 0) offset is expressed in the new (rotated) frame,
# so rotate it into the base frame before broadcasting
bTo_transformValues = numpy.dot(tf.transformations.quaternion_matrix(bToq1), [[1.0], [2.0], [0.0], [1]])
bTo.transform.translation.x = bTo_transformValues[0]
bTo.transform.translation.y = bTo_transformValues[1]
bTo.transform.translation.z = bTo_transformValues[2]
Then, broadcast the new object.
object_frame = bTo
br.sendTransform(object_frame)  # br is a tf2_ros.TransformBroadcaster, created once at node startup
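For context, here is a minimal sketch of a complete broadcaster node around this pattern; the node name and the 10 Hz rate are my own placeholder choices, not from the assignment:

import rospy
import tf2_ros
import geometry_msgs.msg

rospy.init_node('frame_broadcaster')   # hypothetical node name
br = tf2_ros.TransformBroadcaster()    # the `br` used above
rate = rospy.Rate(10)                  # re-broadcast periodically so RViz keeps seeing it
while not rospy.is_shutdown():
    bTo = geometry_msgs.msg.TransformStamped()
    bTo.header.stamp = rospy.Time.now()
    bTo.header.frame_id = "base_frame"
    bTo.child_frame_id = "object_frame"
    bTo.transform.rotation.w = 1.0     # identity rotation as a placeholder
    br.sendTransform(bTo)
    rate.sleep()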
Example 2: Create messages for two additional transformations, base to a new object named robot and a transformation from robot to a camera frame.
- The transformation from base to robot is a 2 radian rotation around the z-axis with a following translation of -1.5 m along the resulting y-axis.
- The transformation from the robot to the camera is more complicated: the translation component is 0.5 in the x direction and 0.5 in the z direction, but the rotation should result in the camera pointing in the direction of the object.
The relationship between the different objects can feel somewhat abstract, so the relationship between transformation matrices is drawn below.
When inverting, the inverse translation matrix is multiplied to the left of the inverse rotation matrix.
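This is just the rule $(AB)^{-1} = B^{-1}A^{-1}$ applied to a transform assembled as rotation times translation:

$$(T_{rot}\,T_{trans})^{-1} = T_{trans}^{-1}\,T_{rot}^{-1}$$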
bTr = geometry_msgs.msg.TransformStamped()
bTr.header.stamp = rospy.Time.now()
bTr.header.frame_id = "base_frame"
bTr.child_frame_id = "robot_frame"
# rotation: 2 radians about the z-axis
bTrq1 = tf.transformations.quaternion_about_axis(2.0, (0, 0, 1))
bTr.transform.rotation.x = bTrq1[0]
bTr.transform.rotation.y = bTrq1[1]
bTr.transform.rotation.z = bTrq1[2]
bTr.transform.rotation.w = bTrq1[3]
# translation: -1.5 m along the rotated y-axis, expressed in the base frame
bTr_transformValues = numpy.dot(tf.transformations.quaternion_matrix(bTrq1), [[0.0], [-1.5], [0.0], [1]])
bTr.transform.translation.x = bTr_transformValues[0]
bTr.transform.translation.y = bTr_transformValues[1]
bTr.transform.translation.z = bTr_transformValues[2]
robot_frame = bTr
br.sendTransform(robot_frame)

rTc = geometry_msgs.msg.TransformStamped()
rTc.header.stamp = rospy.Time.now()
rTc.header.frame_id = "robot_frame"
rTc.child_frame_id = "camera_frame"
rTc.transform.translation.x = 0.5
rTc.transform.translation.y = 0.0
rTc.transform.translation.z = 0.5

# Calculation of the rotation quaternion
# To get the aim transform cTo, multiply inv(rTc) inv(bTr) (bTo)
# Separate rotation and translation matrices for each; for an inverse, put translation to the left of rotation.
# rTc translation inverse
rTcTrInv = tf.transformations.inverse_matrix(tf.transformations.translation_matrix((0.5, 0.0, 0.5)))
# bTr translation inverse
bTrTrInv = tf.transformations.inverse_matrix(tf.transformations.translation_matrix((0.0, 1.5, 0.0)))
# bTo translation
bToTr = tf.transformations.translation_matrix((1.5, 0.8, 0.0))
# bTr rotation inverse, referring to the bTrq1 quaternion calculation above
bTrRotInv = tf.transformations.inverse_matrix(tf.transformations.quaternion_matrix(bTrq1))
# bTo rotation, referring to the bToq1 quaternion calculation above
bToRot = tf.transformations.quaternion_matrix(bToq1)
# multiply all the matrices (robotToCameraInv * baseToRobotPositionInv * baseToRobotRotationInv * baseToObjectRotation * baseToObjectPosition)
cTo = numpy.dot(numpy.dot(numpy.dot(numpy.dot(rTcTrInv, bTrTrInv), bTrRotInv), bToRot), bToTr)
The base to object transformation is known, but the rotation portion of the object to camera transformation is unknown and must be determined. Meaning, the green arrow below is known, but the rotation from the dark blue orientation to the new light blue orientation is unknown (shown in red as a rotation around an axis, w).
However, we have a handy trick that can be used alongside our handy tf and tf2 toolkits, based on finding a rotation angle and rotation axis. As we learned in geometry class, it is possible to find a vector perpendicular to two vectors using the cross product (always be sure to check that it points the right way using the right-hand rule!) and to find the shortest angle between two vectors. Any of the origin frame's axes may be used for this, with the x-axis used in the example below.
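For two vectors $a$ and $b$, the rotation axis and the shortest angle between them are:

$$w = a \times b \qquad \theta = \arccos\!\left(\frac{a \cdot b}{\lVert a\rVert\,\lVert b\rVert}\right)$$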
With these formulas, we can calculate the rotation needed to complete the transformation. In the code below, the x-axis is also used.
# take the position vector from the matrix (requires import math)
cToVect = tf.transformations.translation_from_matrix(cTo)
# rotation axis: cross product of the x-axis with the direction to the object
cTo_rot_axis = numpy.cross([1, 0, 0], cToVect)
# rotation angle: shortest angle between the two vectors
cTo_rot_rad = math.acos(numpy.dot([1, 0, 0], cToVect)/(numpy.linalg.norm([1, 0, 0])*numpy.linalg.norm(cToVect)))
rTcq1 = tf.transformations.quaternion_about_axis(cTo_rot_rad, cTo_rot_axis)
rTc.transform.rotation.x = rTcq1[0]
rTc.transform.rotation.y = rTcq1[1]
rTc.transform.rotation.z = rTcq1[2]
rTc.transform.rotation.w = rTcq1[3]
Part 2: Transmitting Joint Transforms Using Forward Kinematics
In this part, the goal is to broadcast all of the transforms from the base (root) link to each joint in the chain. This is an extremely important concept because it shows the iterative nature of these calculations. First, the rotation or translation values of each joint are listed, and then the values corresponding to each joint are used to calculate and broadcast successive transforms. The joint names must also be kept in the correct order, so a lookup by joint name should be used to find each joint's value.
The first portion involves listing all the links and joints by getting the root link and following the child nodes to list out the joint names in the correct order. This information is dependent on the structure of the robot that is being built, and uses files available online.
First, the URDF Parser. URDF means Unified Robot Description Format, and most commercially available robots include this description from the manufacturer. The Python parser takes this file and enables a few handy calls that are essential for getting information about the robot and how each of its joints relates to the others. In later projects, this is important for any inverse kinematics and movement. In this case, get_root() and robot.child_map are useful for looking up each link.
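As a minimal sketch (assuming the robot description has been loaded onto the parameter server, which the course launch files take care of), loading and querying the model looks like:

from urdf_parser_py.urdf import URDF

robot = URDF.from_parameter_server()               # reads /robot_description
root = robot.get_root()                            # name of the root link
joint_name, child_link = robot.child_map[root][0]  # first joint/link below the root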
The code below is a function called on a URDF robot object, with the attributes mentioned above.
import numpy
import geometry_msgs.msg
import rospy
from sensor_msgs.msg import JointState
import tf
import tf.msg
import tf2_ros
from urdf_parser_py.urdf import URDF

# make broadcaster
broadcaster = tf2_ros.TransformBroadcaster()

# initialize
joints_list = []
links_list = []
more_links = True

# make a list of all the links and joints
link = self.robot.get_root()
world = self.robot.get_root()
while more_links == True:
    (next_joint_name, next_link) = self.robot.child_map[link][0]
    joints_list = joints_list + [next_joint_name]
    links_list = links_list + [next_link]
    link = next_link
    if link not in self.robot.child_map:
        more_links = False
        endeffector = next_link
num_link = len(links_list)
Once an ordered list of links is built, the transformation matrices are built iteratively. The identity matrix is used to initialize the transform at the root of the robot; then, depending on whether the joint is revolute or fixed, the appropriate transformation matrices are multiplied in. After broadcasting the update for the joint, the next link's transform is formed and multiplied in to generate the next joint broadcast.
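Written as a recurrence (my notation, matching the code below: $xyz_i$ and $rpy_i$ are the joint's fixed origin from the URDF, and $q_i$ is the current joint value about $axis_i$):

$${}^{b}T_{i+1} = {}^{b}T_{i}\;T_{trans}(xyz_i)\;T_{rot}(rpy_i)\;R(q_i, axis_i)$$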
# initialize tfMat with identity
tfMat = tf.transformations.identity_matrix()
# initialize again for transformations
link = self.robot.get_root()
# loop through all frames in the child map
for x in range(num_link):
    # name of the frame after root
    (next_joint_name, next_link) = self.robot.child_map[link][0]
    # pull out the joint object to get attributes from it
    next_joint = self.robot.joint_map[next_joint_name]
    # look up joint type
    if next_joint.type == 'revolute':
        # look up index of joint_values with matching joint name
        i = joint_values.name.index(joints_list[x])
        joint_position = joint_values.position[i]
        # multiply in the fixed origin offset, then the joint rotation about its axis
        tfMat_Trans = numpy.dot(tfMat, tf.transformations.translation_matrix(next_joint.origin.xyz))
        tfMat_Rot = numpy.dot(tfMat_Trans, tf.transformations.euler_matrix(next_joint.origin.rpy[0], next_joint.origin.rpy[1], next_joint.origin.rpy[2]))
        tfMat_Final = numpy.dot(tfMat_Rot, tf.transformations.rotation_matrix(joint_position, next_joint.axis))
    else:
        # aka 'fixed' joint type: no joint rotation, so stop after the origin offset
        tfMat_Trans = numpy.dot(tfMat, tf.transformations.translation_matrix(next_joint.origin.xyz))
        tfMat_Rot = numpy.dot(tfMat_Trans, tf.transformations.euler_matrix(next_joint.origin.rpy[0], next_joint.origin.rpy[1], next_joint.origin.rpy[2]))
        tfMat_Final = tfMat_Rot
    message = convert_to_message(tfMat_Final, next_link, world)
    broadcaster.sendTransform(message)
    # carry the accumulated transform forward and move to the next link in the chain
    tfMat = tfMat_Final
    link = next_link
Part 3: Cartesian Control and Numerical Inverse Kinematics
So you can tell a robot where to put its joints given joint angles, but how do you know what joint angles to command to achieve a specific goal? Inverse kinematics is the process of taking a target pose, say for the end effector of the robot, and calculating the joint positions necessary to achieve it.
There are three functions of particular importance in this process. First is a function that calculates a Jacobian given the transform from base to end effector and each of the joint transforms. For a robot with independent joints, the Jacobian is assembled column by column, with one column per joint. The background on this concept is well documented, but in short, the Jacobian is a matrix that uses linearization to relate small movements of the joints to the resulting movement of the end effector.
Using the Jacobian, the resulting velocity of an end effector can be described by multiplying the relevant parts of the velocity matrix for a particular joint by the velocity of that joint. In the equation below, the velocity of the end effector is the product of the velocity transmission matrix and the joint velocity:

$$v_{ee} = V_j\,\dot{q}_j$$
We will go ahead and only use the translation portion of the velocity at the end effector, and neglect the rotation, by only using the top part of the matrix.
In addition, the actuated joints are usually revolute joints, which conventionally have the z-axis aligned with the rotation axis. Therefore, the angular velocity is limited to the z-axis, and as a result, the last column of the velocity matrix alone can sufficiently convey the impact of that joint on the end effector position.
For all the joints combined, the end effector velocity is the sum of the relevant portions of the velocity matrices for each joint. This can be conveniently summarized by appending each column to form a matrix that is multiplied by the vector of joint velocities, $\dot{q}$.
The Jacobian is the part that is multiplied by joint velocities to get the end effector velocity.
Resulting in a very important fundamental equation of robotics:

$$\dot{x} = J(q)\,\dot{q}$$
This calculation can be seen in the code below.
'''This is a function which will assemble the jacobian of the robot using the current joint transforms and the transform from the base to the end effector (b_T_ee). Both the cartesian control callback and the inverse kinematics callback will make use of this function. Usage: J = self.get_jacobian(b_T_ee, joint_transforms)'''
def get_jacobian(self, b_T_ee, joint_transforms):
    J = numpy.zeros((6, self.num_joints))
    #--------------------------------------------------------------------------
    #FILL IN YOUR PART OF THE CODE FOR ASSEMBLING THE CURRENT JACOBIAN HERE

    #FUNCTION SKEW SYMMETRIC of x, y, z
    def skew_symmetric(x, y, z):
        SS = numpy.zeros((3, 3))
        SS[0,1] = -z
        SS[0,2] = y
        SS[1,0] = z
        SS[1,2] = -x
        SS[2,0] = -y
        SS[2,1] = x
        return SS

    #FUNCTION TO GENERATE VELOCITY MATRIX from j_T_ee
    def velocity_matrix(j_T_ee):
        # initialize V_matrix
        V_matrix = numpy.zeros((6,6))
        # get inverse, ee_T_j
        ee_T_j = tf.transformations.inverse_matrix(j_T_ee)
        # calculate R (ee_R_j)
        R = ee_T_j[0:3, 0:3]
        # calculate t = -R * S(t_j), where t_j is the translation part of j_T_ee and S() is skew symmetric
        j_T_ee_t = j_T_ee[0:3, 3]
        t = -numpy.dot(R, skew_symmetric(j_T_ee_t[0], j_T_ee_t[1], j_T_ee_t[2]))
        # fill in V matrix
        V_matrix[0:3,0:3] = R
        V_matrix[3:6,3:6] = R
        V_matrix[0:3,3:6] = t
        return V_matrix

    #INITIALIZE j_T_ee_list
    j_T_ee_list = []
    #FILL IN j_T_ee_list
    # loop to append j_T_ee after calculating (b_T_j^-1)(b_T_ee)
    for i in range(0, self.num_joints):
        j_T_ee_list.append(numpy.dot(numpy.linalg.inv(joint_transforms[i]), b_T_ee))
    #FILL IN J BY CALCULATING Vi_mat FOR EACH JOINT AND USING joint_axis TO MAKE Vi_col
    # transform to Vi, multiply by the joint axis to get column i of J
    for i in range(0, self.num_joints):
        Vi_mat = velocity_matrix(j_T_ee_list[i])
        Vi_col = numpy.dot(Vi_mat[:,3:6], self.joint_axes[i])
        #ADD JOINT COLUMN TO JACOBIAN for each j_T_ee
        J[:, i] = Vi_col
    #--------------------------------------------------------------------------
    return J
Next, the cartesian_command function reads the commands of type cartesian_control/CartesianCommand. When the .msg file is inspected, it is of the format:
geometry_msgs/Transform x_target
bool secondary_objective
float32 q0_target
These are familiar attribute structures.
The function that uses this message is quite complex and does a number of things: establishing the desired movement, populating the Jacobian using the get_jacobian function, and multiplying the pseudoinverse of the Jacobian by the desired change in end effector position (delta x, as a linearized stand-in for $\dot{x}$) to get the individual joint velocities.
There is also a portion regarding a secondary goal, in which the base joint's orientation is constrained, which adds another layer of excitement. This is achieved by adding the nullspace projection of the secondary objective to the solution; because the projection is in the null space of the Jacobian, the end effector position is not altered. Finally, the movement looks shaky if the velocities are not controlled, so scaling steps are incorporated to limit large joint velocities.
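Putting the pieces together, the commanded joint velocity is the pseudoinverse solution plus the nullspace-projected secondary objective (with $J^{+}$ the pseudoinverse):

$$\dot{q} = J^{+}\dot{x}_{des} + \left(I - J^{+}J\right)\dot{q}_{sec}$$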
The function below is a callback, executed when the Cartesian control channel receives a new command with a target (and possibly a secondary objective); it publishes joint velocities to the /joint_velocities topic.
def get_cartesian_command(self, command):
    self.mutex.acquire()
    #--------------------------------------------------------------------------
    #FUNCTION TO MAKE MATRIX FROM A tf FRAME OBJECT
    def transform_to_matrix(tf_frame):
        # pull out rotation (quaternion) and make into x_rot
        x_rot_q = [tf_frame.rotation.x, tf_frame.rotation.y, tf_frame.rotation.z, tf_frame.rotation.w]
        x_rot = tf.transformations.quaternion_matrix(x_rot_q)
        # pull out translation
        x_trans = tf.transformations.translation_matrix([tf_frame.translation.x, tf_frame.translation.y, tf_frame.translation.z])
        # get final T from (x_trans * x_rot)
        mat_x_target = numpy.dot(x_trans, x_rot)
        return mat_x_target

    #INITIALIZE
    delta_x = numpy.zeros((6,1))

    #FILL IN self KNOWNS
    self.joint_velocity_msg.name = self.joint_names
    joint_values = self.q_current

    #USE FWK TO GET joint_transforms AND b_T_ee FOR CURRENT POSITION
    joint_transforms, b_T_eec = self.forward_kinematics(joint_values)

    #CALCULATE delta_x
    # calculate b_T_eed from command
    b_T_eed = transform_to_matrix(command.x_target)
    # make transformation eec_T_eed using (b_T_eec^-1)(b_T_eed)
    eec_T_eed = numpy.dot(numpy.linalg.inv(b_T_eec), b_T_eed)
    # fill in delta_t (first three rows of delta_x)
    delta_x[0:3,0] = eec_T_eed[0:3,3]
    # fill in delta_r (last three rows of delta_x)
    angle, axis = self.rotation_from_matrix(eec_T_eed)
    delta_r = numpy.dot(axis, angle)
    delta_x[3:6,0] = numpy.transpose(delta_r)

    #SCALE delta_x TO 0.1 m/s AND 1 rad/s
    if numpy.linalg.norm(delta_x[0:3]) > 0.1:
        normalize10 = 10*numpy.linalg.norm(delta_x[0:3])
        delta_x[0] = delta_x[0]/normalize10
        delta_x[1] = delta_x[1]/normalize10
        delta_x[2] = delta_x[2]/normalize10
    if numpy.linalg.norm(delta_x[3:6]) > 1:
        normalize = numpy.linalg.norm(delta_x[3:6])
        delta_x[3] = delta_x[3]/normalize
        delta_x[4] = delta_x[4]/normalize
        delta_x[5] = delta_x[5]/normalize

    #CALCULATE vee
    # set gain_p and multiply delta_x by gain
    gain_p = 1
    vee = numpy.dot(gain_p, delta_x)

    #MAKE JACOBIAN AND PSEUDOINVERSE
    J = self.get_jacobian(b_T_eec, joint_transforms)
    Jpinv = numpy.linalg.pinv(J, 0.01)

    #CALCULATE qdot = pinvJ vee
    qdot = numpy.dot(Jpinv, vee)

    #CHECK FOR SECONDARY OBJECTIVE
    if command.secondary_objective:
        # initialize
        delta_q_sec = numpy.zeros((self.num_joints, 1))
        # calculate qdot_sec from gain_s and the base joint error
        delta_q_sec[0,0] = command.q0_target - self.q_current[0]
        gain_s = 3
        qdot_sec = numpy.dot(gain_s, delta_q_sec)
        # project qdot_sec into the nullspace by multiplying by I - pinvJ J
        Jnull = numpy.identity(self.num_joints) - numpy.dot(numpy.linalg.pinv(J), J)
        qdot_sec_proj = numpy.dot(Jnull, qdot_sec)
        # reassign qdot to be the sum of the old qdot and the projected qdot_sec
        qdot = qdot + qdot_sec_proj

    #SCALE qdot TO LIMIT TO 1 rad/s
    if numpy.linalg.norm(qdot) > 1:
        qdot = numpy.divide(qdot, numpy.linalg.norm(qdot))

    #TRANSMIT
    # publish into /joint_velocities
    self.joint_velocity_msg.velocity = qdot
    self.velocity_pub.publish(self.joint_velocity_msg)
    #--------------------------------------------------------------------------
    self.mutex.release()
Finally, the inverse kinematics callback publishes the joint positions needed to achieve a desired pose, using a random seed and a progressively more accurate approximation technique similar to Euler's method. To contain the processing time, the code includes a ten second time limit and three chances to achieve the goal with a new random seed.
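Each iteration of that approximation is the standard resolved-rate update (my summary of what the loop below does):

$$q_{k+1} = q_k + J^{+}(q_k)\,\Delta x$$

where $\Delta x$ is the pose error between the current and desired end effector transforms.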
The following function is a callback in response to inverse kinematics receiving a command, which contains a desired end effector pose. The callback publishes the resulting joint positions on the joint command topic.
def get_ik_command(self, command):
    self.mutex.acquire()
    #--------------------------------------------------------------------------
    def transform_to_matrix(tf_frame):
        # pull out rotation (quaternion) and make into x_rot
        x_rot_q = [tf_frame.rotation.x, tf_frame.rotation.y, tf_frame.rotation.z, tf_frame.rotation.w]
        x_rot = tf.transformations.quaternion_matrix(x_rot_q)
        # pull out translation
        x_trans = tf.transformations.translation_matrix([tf_frame.translation.x, tf_frame.translation.y, tf_frame.translation.z])
        # get final T from (x_trans * x_rot)
        mat_x_target = numpy.dot(x_trans, x_rot)
        return mat_x_target

    #INITIALIZE
    eec_T_eed = numpy.ones((4, 4))
    delta_q = numpy.ones((self.num_joints, 1))
    delta_x = numpy.zeros((6, 1))
    i = 0

    #FILL IN self KNOWNS
    self.joint_command_msg.name = self.joint_names
    joint_values = self.q_current

    #FUNCTION TO CHECK IF eec_T_eed IS CLOSE TO IDENTITY
    def ik_check_close(T):
        # pull out rotation and translation
        R_q = tf.transformations.quaternion_from_matrix(T)
        t = numpy.array([T[0,3], T[1,3], T[2,3]])
        if numpy.any(numpy.abs(t) > 0.001):
            return False
        # compare against the identity quaternion (0, 0, 0, 1)
        if numpy.any(numpy.abs(R_q - [0.0, 0.0, 0.0, 1.0]) > 0.001):
            return False
        return True

    #CALCULATE b_T_eed TRANSFORM FROM command
    b_T_eed = transform_to_matrix(command)

    #OPEN WHILE LOOP TO START CYCLE COUNTER (three tries with new random seeds)
    while i < 3:
        #START SEED qcurr OF LENGTH self.num_joints AND SET now
        qcurr = numpy.random.rand(self.num_joints, 1)
        now = rospy.get_rostime().secs
        #OPEN WHILE LOOP TO START TIMER (ten second limit)
        while rospy.get_rostime().secs - now < 10:
            #CHECK IF WE ARE CLOSE ENOUGH, IF SO break
            if numpy.all(numpy.abs(delta_q) < 0.001):
                i = 3
                break
            #RUN FWK TO GET NEW joint_transforms AND NEW b_T_eec
            joint_transforms, b_T_eec = self.forward_kinematics(qcurr)
            #CALCULATE eec_T_eed AS A TRANSFORM MATRIX
            eec_T_eed = numpy.dot(numpy.linalg.inv(b_T_eec), b_T_eed)
            #CALCULATE delta_x
            # fill in delta_t (first three rows of delta_x)
            delta_x[0:3,0] = eec_T_eed[0:3,3]
            # fill in delta_r (last three rows of delta_x)
            angle, axis = self.rotation_from_matrix(eec_T_eed)
            delta_r = numpy.dot(axis, angle)
            delta_x[3:6,0] = numpy.transpose(delta_r)
            #MAKE JACOBIAN AND PSEUDOINVERSE
            J = self.get_jacobian(b_T_eec, joint_transforms)
            Jpinv = numpy.linalg.pinv(J, 0.01)
            #CALCULATE delta_q = pinvJ delta_x
            delta_q = numpy.dot(Jpinv, delta_x)
            #ADD delta_q TO qcurr
            qcurr += delta_q
        i += 1

    self.joint_command_msg.position = qcurr
    #TRANSMIT qcurr
    self.joint_command_pub.publish(self.joint_command_msg)
    #--------------------------------------------------------------------------
    self.mutex.release()
Part 4: Path Planning using RRT
To make path planning tractable, it is better to use an existing library for efficiency. The MoveIt library is used here and can be installed manually.
sudo apt-get install ros-kinetic-moveit
Specifically for Ubuntu 16.04, some upgrades must be installed.
sudo apt-get update
sudo apt-get install python-pip
sudo -H pip install pyassimp --upgrade
The actual implementation of this activity requires many practical considerations in addition to the algorithmic process.
The code ended up using eight different functions to achieve the path planning. The most fundamental are pose_dist and close_enough, for evaluating whether we are close enough to the goal; segment and is_segment_valid, which are used both to evaluate nodes added by random sampling and to shorten the path after a valid one is found; and closest_index and shorten_path, which are used to improve arm movement efficiency (transform_to_matrix and get_unit_vector round out the set).
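The core RRT extension step that closest_index and get_unit_vector implement together: from the tree node $q_{near}$ nearest a random sample $q_{rand}$, take a step of fixed length $\epsilon$ (0.1 here) toward the sample:

$$q_{new} = q_{near} + \epsilon\,\frac{q_{rand} - q_{near}}{\lVert q_{rand} - q_{near}\rVert}$$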
#FUNCTION TO MAKE MATRIX FROM A tf FRAME OBJECT
def transform_to_matrix(tf_frame):
    # pull out rotation (quaternion) and make into x_rot
    x_rot_q = [tf_frame.rotation.x, tf_frame.rotation.y, tf_frame.rotation.z, tf_frame.rotation.w]
    x_rot = tf.transformations.quaternion_matrix(x_rot_q)
    # pull out translation
    x_trans = tf.transformations.translation_matrix([tf_frame.translation.x, tf_frame.translation.y, tf_frame.translation.z])
    # get final T from (x_trans * x_rot)
    mat_x_target = numpy.dot(x_trans, x_rot)
    return mat_x_target

#FUNCTION FOR DISTANCE BETWEEN TWO POSES
def pose_dist(q_1, q_2):
    # norm of vector between points (always positive)
    dist = numpy.linalg.norm(numpy.subtract(q_2, q_1))
    return dist

#FUNCTION FOR CLOSE ENOUGH TO END
def close_enough(q_1, q_2):
    # set tolerance
    tol = 0.0001
    # check whether the distance is within tolerance. If so, return True.
    if pose_dist(q_1, q_2) < tol:
        return True
    # if the distance is larger, return False
    return False

#FUNCTION TO SEGMENT A PATH INTO MOVEMENTS OF SIZE segment_size.
#DOES NOT INCLUDE START IN OUTPUT ARRAY; OUTPUT IS q, NOT OBJECT.
#WITH state_bool=True IT ONLY CHECKS VALIDITY INSTEAD OF RETURNING THE SEGMENTS.
def segment(q_start, q_end, segment_size, state_bool):
    # calculate distance between positions
    distance_start_end = pose_dist(q_end, q_start)
    print ("Total distance was " + str(distance_start_end) + " " + str(state_bool))
    # check if we are already closer than one segment
    if distance_start_end < segment_size:
        return []
    # calculate how many segments we need
    num_segments = math.ceil(numpy.abs(distance_start_end/segment_size))
    print ("cut into " + str(num_segments))
    # calculate the "step" delta_q for each joint
    delta_q_sizes = []
    for i in range(0, len(q_start)):
        delta_q_sizes.append((q_end[i]-q_start[i])/num_segments)
    # starting at the pose after the start, build an array with the pose incremented by delta_q for each joint
    segment_array = []
    prev_q = q_start
    # only evaluate validity without storing
    if state_bool:
        for i in range(0, int(num_segments)):
            new_q = numpy.add(prev_q, numpy.asarray(delta_q_sizes))
            if not self.is_state_valid(new_q):
                print ("failed at " + str(i))
                return False
            prev_q = new_q
        return True
    # return array of intermediate states
    else:
        for i in range(0, int(num_segments)):
            new_q = numpy.add(prev_q, numpy.asarray(delta_q_sizes))
            segment_array.append(new_q)
            prev_q = new_q
        return segment_array

#FUNCTION TO CHECK IF SEGMENT IS VALID
def is_segment_valid(q_start, q_end):
    # set tolerance
    tol = 0.01
    # are the start and end valid?
    if not self.is_state_valid(q_start):
        print ("NOT A VALID START CONFIGURATION")
        return False
    if not self.is_state_valid(q_end):
        print ("NOT A VALID END CONFIGURATION")
        return False
    # is the segment shorter than the tolerance?
    if pose_dist(q_start, q_end) < tol:
        return True
    # otherwise, check the points through the segment function
    return segment(q_start, q_end, tol, True)

#FUNCTION FOR FINDING INDEX OF CLOSEST POINT/BRANCH IN TREE TO RANDOM q_new. DOES NOT TOLERATE EMPTY ARRAY.
def closest_index(q_new, q_obj_array):
    # initialize the smallest distance and index
    smallest_dist = q_obj_array[0].q
    smallest_index = 0
    # loop through to check if there is a closer pose in the array
    for i in range(0, len(q_obj_array)):
        if pose_dist(q_obj_array[i].q, q_new) < pose_dist(smallest_dist, q_new):
            smallest_dist = q_obj_array[i].q
            smallest_index = i
    # return index of closest pose in array
    return smallest_index

#FUNCTION TO GET VECTOR OF LENGTH length FROM q_tree TOWARD q_new
def get_unit_vector(q_tree, q_new):
    # length and scaling factor for norm
    length = 0.1
    scale = 1/length
    # calculate norm
    norm = pose_dist(q_tree, q_new)
    # scale vector so its norm equals length
    unit_vector = numpy.divide(numpy.subtract(q_new, q_tree), scale*norm)
    print ("length of branch was " + str(numpy.linalg.norm(unit_vector)))
    return unit_vector

#FUNCTION TO OUTPUT ARRAY OF q FOR THE SHORTENED PATH
def shorten_path(q_array):
    # initialize
    segment_start = q_array[0]
    i_start = 0
    # start new q array
    short_q_array = [segment_start]
    # find the farthest point that can be reached directly from segment_start
    for i in range(0, len(q_array)):
        # check if we're at the end. If so, add it
        if i == len(q_array)-1:
            short_q_array.append(q_array[i])
        # check if the path from start to here isn't valid. If so, add the one before it
        elif not is_segment_valid(segment_start, q_array[i]):
            segment_start = q_array[i-1]
            i_start = i-1
            short_q_array.append(segment_start)
        else:
            pass
    return short_q_array
The rest of this adventure is a series of "making it easier" checks: was a valid command given? Are we already there? Can we go straight there? Have we added enough points that at least one node of the tree can reach the goal unobstructed?
#INITIALIZE AND SET KNOWNS
# init message and points
fin_traj = JointTrajectory()
fin_traj.points = []
# set .joint_names
fin_traj.joint_names = self.joint_names
# starting joint position
q_c = self.q_current

#CALCULATE TRANSFORM MATRIX T_goal AND q_goal
T_goal = transform_to_matrix(ee_goal)
# set q_goal
q_goal = numpy.asarray(self.IK(T_goal))

#CHECK IF WE WERE ASKED FOR A VALID GOAL
if len(q_goal) == 0:
    print ("ee_goal is not valid")
    return
if not self.is_state_valid(q_goal):
    print ("ee_goal is not valid")
    return

#CHECK IF WE'RE ALREADY THERE
if close_enough(q_c, q_goal):
    print ("Goal is same as start point")
    return

#START segmented_path_tree WITH CURRENT POSITION
segmented_path_tree = [q_c]

#CAN WE GO STRAIGHT THERE? IF SO, SKIP TO SEGMENT
if is_segment_valid(q_c, q_goal):
    print ("GOING STRAIGHT THERE")
    q_between = segment(q_c, q_goal, 0.5, False)  # includes q_goal
    segmented_path_tree.extend(q_between)
else:
    #BUILD RANDOM TREE
    # initialize rand_tree
    rand_tree = [RRTBranch([], q_c)]
    q_curr = q_c
    branch_count = 1
    # open while loop until we can reach the goal configuration
    while pose_dist(q_curr, q_goal) > 3 or not is_segment_valid(q_curr, q_goal):
        # random sample over the joint range (-pi, pi)
        q_rand = -math.pi + numpy.dot(2*math.pi, numpy.random.rand(self.num_joints))
        # find index of nearest tree entry
        i_close = closest_index(q_rand, rand_tree)
        delt_q = get_unit_vector(rand_tree[i_close].q, q_rand)
        print ("final length branch " + str(numpy.linalg.norm(delt_q)))
        # step from the nearest tree node toward the sample
        q_curr = rand_tree[i_close].q + delt_q
        # check for obstacles in the segment
        if is_segment_valid(rand_tree[i_close].q, q_curr):
            branch_count += 1
            # append the object to the rand_tree
            rand_tree.append(RRTBranch(rand_tree[i_close], q_curr))
            print ("Added branch " + str(branch_count))
        else:
            q_curr = q_c
    # add on the path to q_goal
    rand_tree.append(RRTBranch(rand_tree[len(rand_tree)-1], q_goal))
    # hold on to the last object for use in path_tree
    last_RRTBranch = rand_tree[len(rand_tree)-1]

    #BUILD SUCCESSFUL PATH TREE
    path_tree = [q_goal]
    # fill in while we aren't at the [] parent of the root
    while last_RRTBranch != []:
        # prepend the branch's q onto the front of the path
        path_tree[0:0] = [last_RRTBranch.q]
        # move to the parent branch
        last_RRTBranch = last_RRTBranch.parent
    print ("Path Tree " + str(len(path_tree)))

    #MAKE NEW LIST WITH SHORTENED PATHS TO MAKE THEM EFFICIENT
    short_path_tree = shorten_path(path_tree)
    print ("Short Path Tree " + str(len(short_path_tree)))

    #SEGMENT NEW LIST TO MAKE IT RUNNABLE
    # init
    segmented_path_tree = [q_c]
    # add a segment for each link in the short path tree
    for i in range(0, len(short_path_tree)-1):
        q_between = segment(short_path_tree[i], short_path_tree[i+1], 0.5, False)
        segmented_path_tree.extend(q_between)

#ASSIGN TO JointTrajectory TYPE
# .points.positions start and end
for q in segmented_path_tree:
    # skip the start point, which the robot already occupies
    if (numpy.asarray(q) == numpy.asarray(q_c)).all():
        continue
    new_point = JointTrajectoryPoint()
    new_point.positions = q
    fin_traj.points.append(new_point)

#BROADCAST
self.pub.publish(fin_traj)
Part 5: Extended Kalman Filtering for State Estimation using Sensors
This assignment does not include RViz or the Kuka and UR5 3D models we are used to. Instead, a very cute little turtle robot walks around trying to estimate where it is in life using sensor data and various numbers of star landmarks.
The basic dynamic model for the robot is:
x(k+1) = x(k) + t * vel_trans * cos(theta)
y(k+1) = y(k) + t * vel_trans * sin(theta)
theta(k+1) = theta(k) + t * vel_ang
While the sensor model for the robot is:
range = math.sqrt( (xr-xl)*(xr-xl) + (yr-yl)*(yr-yl) )
bearing = math.atan2(yl-yr, xl-xr) - thetar
These models are then incorporated into the extended Kalman filter, which linearizes them using Jacobians (where the parametric x and y equations require separate partial derivatives), in addition to the standard Kalman filter equations.
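Written out to match the code below, the prediction step is:

$$\hat{x}(k+1) = f(x(k), u(k)) \qquad \hat{P}(k+1) = F\,P(k)\,F^{T} + V$$

with $F = \partial f/\partial x$ evaluated at the current state, and the update step is:

$$\nu = y(k+1) - h(\hat{x}(k+1)) \qquad S = H\hat{P}H^{T} + W \qquad R = \hat{P}H^{T}S^{-1}$$

$$x(k+1) = \hat{x}(k+1) + R\,\nu \qquad P(k+1) = \hat{P} - R\,H\,\hat{P}$$

with $H = \partial h/\partial x$ evaluated at the predicted state.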
The code is shown below.
#FUNCTION FOR BUILDING F FROM com_trans AND theta
def F_mat(com_trans, theta):
    F = numpy.identity(3)
    # derivative: d/dtheta of t*vel_trans*cos(theta) = -t*vel_trans*sin(theta)
    F[0,2] = -1*t*com_trans*numpy.sin(theta)
    # derivative: d/dtheta of t*vel_trans*sin(theta) = t*vel_trans*cos(theta)
    F[1,2] = t*com_trans*numpy.cos(theta)
    return F

#FUNCTION FOR BUILDING P_hat FROM F, P, AND V
def P_hat_mat(F, P, V):
    P_hat = numpy.dot(numpy.dot(F, P), numpy.transpose(F)) + V
    return P_hat

#FUNCTION FOR BUILDING H FROM x, y, num_lm AND lm_list
# use derivatives to find H where x = xr, y = yr, A = xl, B = yl
# range = sqrt((x-A)^2 + (y-B)^2)
#   d/dx = (x-A)/sqrt((x-A)^2+(y-B)^2), d/dy = (y-B)/sqrt((x-A)^2+(y-B)^2)
# bearing = atan2(B-y, A-x) - theta
#   d/dx = (B-y)/((A-x)^2+(B-y)^2), d/dy = (x-A)/((A-x)^2+(B-y)^2), d/dtheta = -1
# TWO ROWS PER LANDMARK
def H_mat_alt(x, y, num_lm, lm_list):
    # init matrix
    H = numpy.zeros((2*num_lm, 3))
    i = 0
    for landmark in lm_list:
        A = landmark.landmark.x
        B = landmark.landmark.y
        # make (x-A) and (y-B)
        C = x - A
        D = y - B
        # make (A-x) and (B-y)
        E = A - x
        F = B - y
        if (math.pow(C,2) + math.pow(D,2)) < 0.01:
            print ("SINGULAR")
        if (math.pow(E,2) + math.pow(F,2)) < 0.01:
            print ("SINGULAR 2")
        # d/dx range --> C/sqrt(C^2+D^2)
        H[i,0] = float(C)/(math.sqrt(math.pow(C,2)+math.pow(D,2)))
        # d/dy range --> D/sqrt(C^2+D^2)
        H[i,1] = float(D)/(math.sqrt(math.pow(C,2)+math.pow(D,2)))
        # d/dx bearing --> F/(E^2+F^2)
        H[i+1,0] = float(F)/((math.pow(E,2)+math.pow(F,2)))
        # d/dy bearing --> -E/(E^2+F^2) = C/(E^2+F^2)
        H[i+1,1] = float(C)/((math.pow(E,2)+math.pow(F,2)))
        # d/dtheta bearing --> -1
        H[i+1,2] = -1
        i += 2
    return H

#FUNCTION FOR BUILDING NU FROM num_lm, lm_list (sens.readings) AND x_hat
# NU = y - h(x_hat)
def NU_vec(num_lm, lm_list, x_hat):
    y = numpy.zeros((2*num_lm, 1))
    i = 0
    for landmark in lm_list:
        y[i] = landmark.range
        y[i+1] = landmark.bearing
        i += 2
    # predicted measurements h(x_hat) using the sensor model equations
    hx = numpy.zeros((2*num_lm, 1))
    i = 0
    for landmark in lm_list:
        xl = landmark.landmark.x
        yl = landmark.landmark.y
        hx[i] = math.sqrt((x_hat[0]-xl)*(x_hat[0]-xl) + (x_hat[1]-yl)*(x_hat[1]-yl))
        hx[i+1] = math.atan2(yl-x_hat[1], xl-x_hat[0]) - x_hat[2]
        i += 2
    NU = y - hx
    return NU

#FUNCTION FOR BUILDING W FROM num_lm WITH ASSUMPTION RANGE COV 0.1 AND BEARING COV 0.05
def W_mat(num_lm):
    range_cov = 0.1
    bearing_cov = 0.05
    W = numpy.zeros((2*num_lm, 2*num_lm))
    for i in range(0, num_lm):
        W[2*i, 2*i] = range_cov
        W[2*i+1, 2*i+1] = bearing_cov
    return W

#FUNCTION FOR BUILDING S FROM H, P_hat, AND W
# S = H*P_hat*H^T + W
def S_mat(num_lm, H, P_hat, W):
    S = numpy.dot(numpy.dot(H, P_hat), numpy.transpose(H)) + W
    return S

#FUNCTION FOR BUILDING R FROM P_hat, H, AND S
# R = P_hat*H^T*S^-1 (S is guaranteed invertible here because W is positive definite)
def R_mat(num_lm, P_hat, H, S):
    R = numpy.dot(numpy.dot(P_hat, numpy.transpose(H)), numpy.linalg.inv(S))
    return R

#FUNCTION FOR UPDATING x WITH x_hat, R AND NU
# result should be 3 x 1
def x_vec_update(x_hat, R, NU):
    x_upd = x_hat + numpy.dot(R, NU)
    return x_upd

#FUNCTION FOR UPDATING P WITH P_hat, R, H
# P = P_hat - R*H*P_hat
def P_mat_update(P_hat, R, H):
    P_update = P_hat - numpy.dot(numpy.dot(R, H), P_hat)
    return P_update

#BUILD V (does not change)
V = self.V

#SET ALL KNOWNS FROM CALLBACK
t = self.step_size
x = self.x
P = self.P
com_trans = sens.vel_trans
com_ang = sens.vel_ang
num_lm = len(sens.readings)
lm_list = sens.readings

#MAKE PREDICTION FROM KNOWNS
# calculate estimate of next state using the dynamic model:
# x(k+1) = x(k) + t * vel_trans * cos(theta)
# y(k+1) = y(k) + t * vel_trans * sin(theta)
# theta(k+1) = theta(k) + t * vel_ang
x_hat_x = x[0] + t*com_trans*numpy.cos(x[2])
x_hat_y = x[1] + t*com_trans*numpy.sin(x[2])
x_hat_theta = x[2] + t*com_ang
x_hat = numpy.zeros((3, 1))
x_hat[0,0] = x_hat_x
x_hat[1,0] = x_hat_y
x_hat[2,0] = x_hat_theta

#BUILD JACOBIAN F
# the jacobian of the dynamic model is 3x3
F = F_mat(com_trans, x[2])
#BUILD P_hat (3x3)
P_hat = P_hat_mat(F, P, V)

#REMOVE LANDMARKS CLOSER THAN 0.1 (the sensor model is singular at the landmark)
lm_list = [l for l in lm_list if math.sqrt(math.pow(l.landmark.x-x_hat[0,0], 2) + math.pow(l.landmark.y-x_hat[1,0], 2)) >= 0.1]
num_lm = len(lm_list)

R = 0
NU = 0
P_upd = P_hat
x_upd = x_hat

#CHECK IF THERE ARE LANDMARKS
if num_lm > 0:
    #BUILD JACOBIAN H
    H = H_mat_alt(x_hat[0,0], x_hat[1,0], num_lm, lm_list)
    #BUILD NU
    # the innovation is nu = y(k+1) - h(x_hat(k+1)), where x_hat is the predicted state
    NU = NU_vec(num_lm, lm_list, x_hat)
    # the bearing innovation should never exceed pi in magnitude - use wraparound (i.e. -270 deg = +90 deg)
    for i in range(0, num_lm):
        while NU[i*2+1,0] < -1*math.pi:
            NU[i*2+1,0] = NU[i*2+1,0] + 2*math.pi
        while NU[i*2+1,0] > 1*math.pi:
            NU[i*2+1,0] = NU[i*2+1,0] - 2*math.pi
    #BUILD W
    W = W_mat(num_lm)
    #BUILD S
    S = S_mat(num_lm, H, P_hat, W)
    #BUILD R
    R = R_mat(num_lm, P_hat, H, S)
    #BUILD x_upd
    x_upd = x_vec_update(x_hat, R, NU)
    #BUILD P_upd
    P_upd = P_mat_update(P_hat, R, H)

#ASSIGN NEW P AND x
# update self.P
self.P = P_upd
# update self.x to be broadcast
self.x = x_upd