People Tracking using a Depth Camera


People tracking using blob analysis.

 

Introduction

Depth cameras are increasingly popular in building automation, occupancy management, security and access control, enabled by low-cost depth sensors like the OPT8241 and OPT8320 3D Time-of-Flight chipsets from Texas Instruments.  A key benefit of depth cameras is the ability to use depth to segregate foreground from background.  Once foreground objects are isolated, they can be recognized, tracked and counted using modern image processing algorithms available in OpenCV.  In this post, I will describe how to use the OPT8241-CDK-EVM depth camera, the BSD-licensed Voxel SDK, and OpenCV to create a simple people counting and tracking application.

The general strategy of people counting and tracking is as follows:

  1. Foreground-Background Separation
  2. Convert to Binary Image and Apply Morphology Filters
  3. Shape Analysis
  4. Tracking

Foreground-Background Separation

Foreground-background separation starts with registering the background, which is necessary before one can separate foreground from background through image subtraction; if a depth camera is used, the subtraction is between two depth images.  Setting the background could be as simple as capturing a frame when the scene is absent of foreground objects.  But this simple approach means background objects that subsequently move will be detected as foreground, though noticing that initial change may be interesting to some applications.  A more sophisticated approach is to slowly fade any alteration back into the background, provided the alteration is not from objects being tracked and is no longer changing.  The first condition requires recognition; the second is met when there is a sustained period of no change in the altered areas of the image.  If this approach is adopted, the foreground is by definition the fast-changing component of the scene, and the background is the slow-changing component.  The rate at which the foreground fades into the background should be a programmable parameter that depends on the application.  After subtraction, the result reflects newly present or newly absent objects.  To reduce the impact of camera noise, the "foreground" may need to be further qualified by a minimum delta depth ("thickness") and a minimum amplitude ("brightness").
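
As a rough sketch of the fade-in idea, OpenCV's accumulateWeighted() can blend each new depth frame into a running background model.  The function and mask names below are mine, not part of the Voxel SDK, and the learning rate alpha is a tunable assumption:

#include <opencv2/opencv.hpp>

// Sketch: fade scene changes into a CV_32FC1 background model 'bkgnd'.
// 'depth' is the current CV_32FC1 depth frame; 'trackedMask' (CV_8UC1)
// marks pixels belonging to tracked objects, which must not be absorbed.
void updateBackground(cv::Mat &bkgnd, const cv::Mat &depth,
                      const cv::Mat &trackedMask, double alpha = 0.01)
{
   cv::Mat updateMask;
   cv::bitwise_not(trackedMask, updateMask);
   // Running average: bkgnd = (1 - alpha)*bkgnd + alpha*depth, masked
   cv::accumulateWeighted(depth, bkgnd, alpha, updateMask);
}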

The code example below illustrates a simple case of foreground-background separation:

void Horus::clipBackground(Mat &dMat, Mat &iMat, float dThr, float iThr)
{
   for (int i = 0; i < dMat.rows; i++) {
      for (int j = 0; j < dMat.cols; j++) {
         float val = (iMat.at<float>(i,j) > iThr && dMat.at<float>(i,j) > dThr) ? 255.0f : 0.0f;
         dMat.at<float>(i,j) = val;
      }
   }
}

where iThr is the intensity threshold, and dThr is the depth threshold.

Binary Image and Morphology Filter

The foreground from subtraction may contain speckles due to noise, as noise varies from frame to frame.  Morphology operators can be applied to remove speckles and fill in small gaps.  The open operator first erodes the image using the chosen morphology element, then dilates the result to fill in gaps and smooth the edges.  An OpenCV example is given below, where the image on the left is the original image, and the image on the right is the result after applying the open operator.  Note the small holes and gaps are filled in.
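
A minimal sketch of the open operator in OpenCV (the 5x5 elliptical element is an assumed, tunable choice):

#include <opencv2/opencv.hpp>

// Apply morphological "open" (erode, then dilate) to a binary image
// to remove speckle noise and smooth blob edges.
cv::Mat openImage(const cv::Mat &binary)
{
   cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
   cv::Mat cleaned;
   cv::morphologyEx(binary, cleaned, cv::MORPH_OPEN, element);
   return cleaned;
}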

Shape Analysis

After the foreground is isolated as a binary image, shape analysis can be performed to find individual objects in the foreground.  This step is where people counting solutions vary; algorithms that differentiate people from objects with high accuracy are considered superior to those that do not.  People tracking algorithms also depend heavily on camera angle.  Algorithms for ceiling-mounted cameras are generally simpler than those for corner-mounted cameras, because from the ceiling "people" look like well-formed blobs, while from the corner "people" become complex overlapping silhouettes that are harder to separate.  Several shape analysis algorithms useful in people tracking and counting are described below.  Most of them are available in OpenCV.

Blob Analysis

Blob analysis works by finding connected, self-enclosing regions in the foreground and filtering them by common properties such as area, thresholds, circularity, inertia and convexity.  Proper selection of these properties can greatly enhance accuracy.  A great summary article on blob analysis, with example code, is available from Satya Mallick.


Blob analysis works best when the camera is ceiling mounted, because people generally look like well-formed blobs from that camera angle.  However, people in physical contact with one another can cause their blobs to join, leading to miscounts.  The erode operator is useful in this case, as it can split thinly connected blobs; a sketch is given below.  Even though blob analysis is a natural fit for ceiling-mounted cameras, it can be applied to corner-mounted cameras if the overlapping issue can be resolved.  One way to deal with overlap is to "slice" the observed volume along the camera's z-axis and perform blob analysis one "slice" at a time.
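
A hedged sketch of the erode step (the element size and iteration count are guesses to tune per scene):

#include <opencv2/opencv.hpp>

// Erode a binary foreground image so blobs joined by thin "bridges"
// (e.g., two people in contact) separate into distinct blobs.
cv::Mat splitTouchingBlobs(const cv::Mat &binary, int iterations = 2)
{
   cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
   cv::Mat separated;
   cv::erode(binary, separated, element, cv::Point(-1, -1), iterations);
   return separated;
}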

Contour Analysis

Foreground shapes can also be recognized and tracked by contours, a list of points forming a self-enclosing outline of the foreground object.  A contour has a length and an enclosed area.  A point in the image can be inside or outside a contour; a contour can be nested inside another; but contours do not cross paths.  Contours can also be compared for similarity.  With these properties set to reflect those of a "person", the number of qualifying contours in the foreground becomes a people count, as sketched below.
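
As an illustration, here is a sketch of counting person-like contours; the area and circularity bounds are placeholder values, not calibrated ones:

#include <opencv2/opencv.hpp>
#include <vector>

// Count foreground contours whose area and circularity fall within
// rough "person" bounds.  Thresholds are illustrative placeholders.
int countPersonLikeContours(const cv::Mat &binary)
{
   std::vector<std::vector<cv::Point>> contours;
   cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
   int count = 0;
   for (const auto &c : contours) {
      double area = cv::contourArea(c);
      double perim = cv::arcLength(c, true);
      if (perim <= 0) continue;
      // Circularity is 1.0 for a circle, lower for elongated shapes
      double circularity = 4.0 * CV_PI * area / (perim * perim);
      if (area > 500.0 && circularity > 0.2)
         count++;
   }
   return count;
}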

A key benefit of contours is the ability to identify appendages, or body parts, such as fingers, legs, arms and shoulders.  This ability is available through contour operators like convex hull and convexity defects.  In the example below, the convex hull is the set of vertices of the green convex polygon, and the convexity defects are the red points at the bottom of the "valleys".  The "valleys" are called convexity defects because they represent violations of convexity.  Once the convex hull and convexity defects are identified, together with the contour centroid and some heuristics, they can identify the head, arms and legs of a person.

[Figure: convex hull (green polygon) and convexity defects (red points)]
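
A sketch of extracting the hull and defects for one contour with OpenCV (interpreting the defects is left to the heuristics mentioned above):

#include <opencv2/opencv.hpp>
#include <vector>

// For one contour, find the convex hull and the convexity defects
// (the "valleys").  Deep defects often mark gaps between appendages.
void findHullAndDefects(const std::vector<cv::Point> &contour,
                        std::vector<int> &hullIndices,
                        std::vector<cv::Vec4i> &defects)
{
   // convexityDefects() requires the hull as point indices
   cv::convexHull(contour, hullIndices, false);
   if (hullIndices.size() > 3)
      cv::convexityDefects(contour, hullIndices, defects);
   // Each defect is (start_idx, end_idx, farthest_pt_idx, depth),
   // where depth is the fixed-point distance scaled by 256.
}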

Region Growing

For corner-mounted cameras, people in the foreground may overlap, especially in a crowded room.  The point cloud of the foreground pixels should be exploited to group points belonging to the same individual.  The region growing algorithm can be applied to group pixels having similar z·cos(θ) distance from the camera, where θ is the camera pitch angle.

The first step is finding suitable seeding points.  One way is to histogram each foreground blob and identify the top 2-3 local maxima, where the maxima must meet some minimum separation requirement.  Then seed, for each maximum, the point closest to the centroid of all points belonging to that maximum.  To grow the region, set each seed as the center, then scan its 8 neighbors to qualify or disqualify them into the group based on the z·cos(θ) distance.  Then, with each qualified neighbor as the center, repeat the same 8-neighbor scan to expand the group (a sketch of this step follows the figure).  The result of an example is given in the figure below.


Region Growing Algorithm in People Counting [1].
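
A sketch of the 8-neighbor growing step under these assumptions: seed selection is omitted, dist holds the per-pixel z·cos(θ) distance, and tol is a tunable tolerance:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <queue>

// Grow a region from 'seed' over pixels whose z*cos(theta) distance
// (precomputed in 'dist', CV_32FC1) stays within 'tol' of a qualified
// neighbor.  Returns a CV_8UC1 mask of the grown region.
cv::Mat growRegion(const cv::Mat &dist, cv::Point seed, float tol)
{
   cv::Mat mask = cv::Mat::zeros(dist.size(), CV_8UC1);
   std::queue<cv::Point> frontier;
   frontier.push(seed);
   mask.at<uchar>(seed) = 255;

   while (!frontier.empty()) {
      cv::Point p = frontier.front();
      frontier.pop();
      for (int dy = -1; dy <= 1; dy++) {
         for (int dx = -1; dx <= 1; dx++) {
            cv::Point q(p.x + dx, p.y + dy);
            if ((dx == 0 && dy == 0) || q.x < 0 || q.y < 0 ||
                q.x >= dist.cols || q.y >= dist.rows)
               continue;
            if (mask.at<uchar>(q)) continue;   // already in the group
            if (std::fabs(dist.at<float>(q) - dist.at<float>(p)) < tol) {
               mask.at<uchar>(q) = 255;        // qualify neighbor into group
               frontier.push(q);
            }
         }
      }
   }
   return mask;
}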

Tracking

In some applications, tracking the movement of people in a room is important; for example, monitoring for suspicious or unusual activities, or quantifying the interest of a crowd in particular products or showcases.  Tracking also enables one to maintain a proper head count in situations where people may be partially or even fully occluded.  In these scenarios, if the tracker has not detected any "people" leaving the scene from the sides of the camera view, then any disappearing blobs must be due to occlusion, so the head count must remain unchanged.  Tracking requires matching foreground entities in consecutive frames.  The matching can be based on multiple criteria, such as shortest centroid displacement and similarity of contour shape and intensity profile.  Subtraction of consecutive frames will also give an excellent indication of the direction of motion, enabling prediction of where the tracked object will be in the new frame.
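
As a minimal sketch of the matching step, each tracked entity is reduced to its centroid and matched greedily to the nearest centroid in the next frame within a gating distance; a real tracker would add the shape and intensity similarity criteria above.  This is a sketch under those assumptions, not part of the example code below:

#include <cmath>
#include <opencv2/opencv.hpp>
#include <vector>

// Greedily match each previous-frame centroid to the nearest
// current-frame centroid within 'maxDist' pixels.  Returns, for each
// previous centroid, the matched current index or -1 (possible occlusion).
std::vector<int> matchCentroids(const std::vector<cv::Point2f> &prev,
                                const std::vector<cv::Point2f> &curr,
                                float maxDist)
{
   std::vector<int> match(prev.size(), -1);
   std::vector<bool> taken(curr.size(), false);
   for (size_t i = 0; i < prev.size(); i++) {
      float best = maxDist;
      for (size_t j = 0; j < curr.size(); j++) {
         if (taken[j]) continue;
         float dx = prev[i].x - curr[j].x;
         float dy = prev[i].y - curr[j].y;
         float d = std::sqrt(dx * dx + dy * dy);
         if (d < best) { best = d; match[i] = (int)j; }
      }
      if (match[i] >= 0) taken[match[i]] = true;
   }
   return match;
}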

Simple Code Example

The code snippet below illustrates the people tracking and counting described above.  The #if preprocessor directive selects between contour tracking and blob tracking.

void Horus::update(Frame *frame)
{
   vector< vector<Point> > contours;
   vector<Vec4i> hierarchy;
   RNG rng(12345);

   if (getFrameType() == DepthCamera::FRAME_XYZI_POINT_CLOUD_FRAME) {

      // Create amplitude and depth Mat
      vector<float> zMap, iMap;
      XYZIPointCloudFrame *frm = dynamic_cast<XYZIPointCloudFrame *>(frame);
      for (int i=0; i< frm->points.size(); i++) {
         zMap.push_back(frm->points[i].z);
         iMap.push_back(frm->points[i].i);
      }
      _iMat = Mat(getDim().height, getDim().width, CV_32FC1, iMap.data());
      _dMat = Mat(getDim().height, getDim().width, CV_32FC1, zMap.data()); 

      // Apply amplitude gain
      _iMat = (float)_ampGain*_iMat;

      // Update background as required
      if (!_setBackground) {
         _dMat.copyTo(_bkgndMat);
         _setBackground = true;
         cout << endl << "Updated background" << endl;
      }

      // Find foreground by subtraction and convert to binary 
      // image based on amplitude and depth thresholds
      Mat fMat = clipBackground((float)_depthThresh/100.0, (float)_ampThresh/100.0);

      // Apply morphological open to clean up image
      fMat.convertTo(_bMat, CV_8U, 255.0);
      Mat morphMat = _bMat.clone();
      Mat element = getStructuringElement( MORPH_RECT, Size(5,5), cv::Point(1,1) );
      morphologyEx(_bMat, morphMat, MORPH_OPEN, element);

      // Draw contours that meet a "person" requirement
      Mat drawing = Mat::zeros( _iMat.size(), CV_8UC3 );
      Mat im_with_keypoints = Mat::zeros( _iMat.size(), CV_8UC3 );
      cvtColor(_iMat, drawing, CV_GRAY2RGB);

      int peopleCount = 0;

#if 1
      // Find all contours
      findContours(morphMat, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0,0));
      for ( int i = 0; i < contours.size(); i++ ) { 
         if (isPerson(contours[i], _dMat)) {  
            peopleCount++;
            drawContours( drawing, contours, i, Scalar(0, 0, 255), 2, 8, vector<Vec4i>(), 0, cv::Point() ); 
         }
      }
#else
      // Find blobs
      std::vector<cv::KeyPoint> keypoints;
      SimpleBlobDetector::Params params;


      // Filter by color
      params.filterByColor = true;
      params.blobColor = 255;

      // Change thresholds - depth
      params.minThreshold = 0;
      params.maxThreshold = 1000;

      // Filter by Area.
      params.filterByArea = true;
      params.minArea = 100;
      params.maxArea = 100000;

      // Filter by Circularity
      params.filterByCircularity = false;
      params.minCircularity = 0.1;
 
      // Filter by Convexity
      params.filterByConvexity = false;
      params.minConvexity = 0.87;
 
      // Filter by Inertia
      params.filterByInertia = false;
      params.minInertiaRatio = 0.01;


      cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params); 
      detector->detect( morphMat, keypoints );

      cout << "Keypoints # " << keypoints.size() << endl;

      for ( int i = 0; i < keypoints.size(); i++ ) { 
	 cv::circle( drawing, cv::Point(keypoints[i].pt.x, keypoints[i].pt.y), 10, Scalar(0,0,255), 4 );
      }
      peopleCount = keypoints.size();
#endif

      putText(drawing, "Count = "+to_string(peopleCount), cv::Point(200, 50), FONT_HERSHEY_PLAIN, 1, Scalar(255, 255, 255));

      imshow("Binary", _bMat);
      imshow("Amplitude", _iMat); 
      imshow("Draw", drawing);
      imshow("Morph", morphMat);
   }
}

Below is a video of people tracking using contours:

References

  1. Method For Segmentation Of Articulated Structures Using Depth Images for Public Displays

Installing Ubuntu 14.04 on Pandaboard


Introduction

Deployment of ROS Indigo requires that all platforms tied to it run Ubuntu 14.04.  The PandaBoard is one such platform, so upgrading it from Ubuntu 12.04 to 14.04 is a must.  However, at the time of this writing, we weren't able to find a pre-built image for the PandaBoard, so we followed a relatively complex but well documented procedure published by Robert Nelson.

The procedure basically has these major parts:

  • Download the build tools
  • Download and build the boot-loader (U-Boot)
  • Download and build the Linux Kernel
  • Download the Root file system (Ubuntu 14.04.1)
  • Setup the SD Card
  • Install Kernel and Root File System on the SD Card
  • Setup Networking and Serial I/O
  • Sync and Unmount the SD Card

Once these steps are done, boot the Pandaboard from the SD card, and then perform some additional sudo apt-get install steps to upgrade to the LXDE desktop.

If you prefer to just get a pre-built image, download the image from here, insert an 8 GB SD card, make sure it is unmounted, and apply the following commands:

tar zxvf ubuntu_14.04_lxde_panda.img.tar.gz
sudo dd if=ubuntu_14.04_lxde_panda.img of=/dev/mmcblk0 bs=1M

If you prefer doing it the hard way, read on…

Download the Build Tools

Install the 32-bit library prerequisites:

sudo apt-get install libc6:i386 libstdc++6:i386 libncurses5:i386 zlib1g:i386
sudo apt-get install build-essential

Now download the build tools:

wget -c https://releases.linaro.org/14.09/components/toolchain/binaries/gcc-linaro-arm-linux-gnueabihf-4.9-2014.09_linux.tar.xz
tar xf gcc-linaro-arm-linux-gnueabihf-4.9-2014.09_linux.tar.xz
export CC=`pwd`/gcc-linaro-arm-linux-gnueabihf-4.9-2014.09_linux/bin/arm-linux-gnueabihf-

Now test to ensure the installation was done properly:

${CC}gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1-4.9-2014.09 - Linaro GCC 4.9-2014.09) 4.9.2 20140904 (prerelease)
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Download and Build the Boot-Loader (U-Boot)

Download the boot-loader

git clone git://git.denx.de/u-boot.git
cd u-boot/
git checkout v2015.01 -b tmp

Now apply the required patches:

wget -c https://raw.githubusercontent.com/eewiki/u-boot-patches/master/v2015.01/0001-omap4_common-uEnv.txt-bootz-n-fixes.patch
patch -p1 < 0001-omap4_common-uEnv.txt-bootz-n-fixes.patch

Build U-Boot:

make ARCH=arm CROSS_COMPILE=${CC} distclean
make ARCH=arm CROSS_COMPILE=${CC} omap4_panda_defconfig
make ARCH=arm CROSS_COMPILE=${CC}

Download and Build the Linux Kernel

First download it:

git clone https://github.com/RobertCNelson/armv7-multiplatform.git
cd armv7-multiplatform

Now check out the proper branch:

git checkout origin/v3.18.x -b tmp

Finally, build it with the supplied script:

./build_kernel.sh

When the build completes successfully, take note of the version number on the screen; it will look something like this:

-----------------------------
Script Complete
eewiki.net: [user@localhost:~$ export kernel_version=3.X.Y-Z]
-----------------------------

Download the Root File System (Ubuntu 14.04.1)

Robert Nelson provides several types of root file systems, including Debian 7, Debian 8, Ubuntu 14.04.1, and a Debian 7 small-flash file system.  For our purpose, we need to install the Ubuntu 14.04.1 file system.

First, download:

wget -c https://rcn-ee.net/rootfs/eewiki/minfs/ubuntu-14.04.1-minimal-armhf-2015-01-20.tar.xz

Then verify:

md5sum ubuntu-14.04.1-minimal-armhf-2015-01-20.tar.xz
fc71da62babe15e45c7e51f8dba22639  ubuntu-14.04.1-minimal-armhf-2015-01-20.tar.xz

and finally extract the file system:

tar xf ubuntu-14.04.1-minimal-armhf-2015-01-20.tar.xz

The extracted files will be copied to the SD card later.

Setup SD Card

Typically a 4 GB SD card will work well.  Plug the SD card into your system and make sure all partitions on the card are unmounted.  Find out the device name; it should be either /dev/sdX or /dev/mmcblk0.  You can use the lsblk command to find out:

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 232.9G  0 disk 
├─sda1        8:1    0   487M  0 part /boot/efi
├─sda2        8:2    0 200.7G  0 part /
└─sda3        8:3    0  31.8G  0 part [SWAP]
sdb           8:16   0 931.5G  0 disk 
└─sdb1        8:17   0 931.5G  0 part 
sdc           8:32   0 238.5G  0 disk 
├─sdc1        8:33   0     1G  0 part 
├─sdc2        8:34   0   260M  0 part 
├─sdc3        8:35   0   128M  0 part 
├─sdc4        8:36   0 223.3G  0 part 
└─sdc5        8:37   0  13.9G  0 part 
sr0          11:0    1  1024M  0 rom  
mmcblk0     179:0    0   3.7G  0 disk 
├─mmcblk0p1 179:1    0  25.5K  0 part 
└─mmcblk0p2 179:2    0   3.7G  0 part 

In this case, the SD card is located at mmcblk0, and it has two partitions.
Now we will define and export a shell variable:

export DISK=/dev/mmcblk0

and erase the SD card:

sudo dd if=/dev/zero of=${DISK} bs=1M count=10

Now we install the boot-loader:

sudo dd if=./u-boot/MLO of=${DISK} count=1 seek=1 conv=notrunc bs=128k
sudo dd if=./u-boot/u-boot.img of=${DISK} count=2 seek=1 conv=notrunc bs=384k

Next, create the partitions:

sudo sfdisk --in-order --Linux --unit M ${DISK} <<-__EOF__
1,,0x83,*
__EOF__

And format:

#for: DISK=/dev/mmcblk0
sudo mkfs.ext4 ${DISK}p1 -L rootfs
 
#for: DISK=/dev/sdX
sudo mkfs.ext4 ${DISK}1 -L rootfs

Now we mount the partitions (on some systems, auto-mount happens when the SD card is inserted).

sudo mkdir -p /media/rootfs/
 
#for: DISK=/dev/mmcblk0
sudo mount ${DISK}p1 /media/rootfs/
 
#for: DISK=/dev/sdX
sudo mount ${DISK}1 /media/rootfs/

Install Kernel and Root File System on SD Card

Here you will need the kernel version number saved after the kernel build.   Use it below by substituting “3.X.Y-Z” with the kernel version number you wrote down.

export kernel_version=3.X.Y-Z

Now, copy the file system to the SD card:

sudo tar xfvp ./*-*-*-armhf-*/armhf-rootfs-*.tar -C /media/rootfs/

and create /boot/uEnv.txt:

sudo sh -c "echo 'uname_r=${kernel_version}' > /media/rootfs/boot/uEnv.txt"

Now, set up the device tree binary, depending on which version of the PandaBoard you have:

PandaBoard EA1->A3
sudo sh -c "echo 'dtb=omap4-panda.dtb' >> /media/rootfs/boot/uEnv.txt"
 
PandaBoard A4->+ (non ES)
sudo sh -c "echo 'dtb=omap4-panda-a4.dtb' >> /media/rootfs/boot/uEnv.txt"
 
PandaBoard ES
sudo sh -c "echo 'dtb=omap4-panda-es.dtb' >> /media/rootfs/boot/uEnv.txt"
 
PandaBoard ES Rev B3
sudo sh -c "echo 'dtb=omap4-panda-es-b3.dtb' >> /media/rootfs/boot/uEnv.txt"

Now copy the kernel files, device tree binaries, and kernel modules:

sudo cp -v ./armv7-multiplatform/deploy/${kernel_version}.zImage /media/rootfs/boot/vmlinuz-${kernel_version}
sudo mkdir -p /media/rootfs/boot/dtbs/${kernel_version}/
sudo tar xfv ./armv7-multiplatform/deploy/${kernel_version}-dtbs.tar.gz -C /media/rootfs/boot/dtbs/${kernel_version}/
sudo tar xfv ./armv7-multiplatform/deploy/${kernel_version}-modules.tar.gz -C /media/rootfs/

Set up the file system table:

sudo sh -c "echo '/dev/mmcblk0p1  /  auto  errors=remount-ro  0  1' >> /media/rootfs/etc/fstab"

Setup Networking and Serial I/O

Edit the file /etc/network/interfaces:

sudo gedit /media/rootfs/etc/network/interfaces

and add the following:

auto lo
iface lo inet loopback
 
auto eth0
iface eth0 inet dhcp

To set up serial login on Ubuntu, we need to create a new file, /etc/init/serial.conf:

sudo gedit /media/rootfs/etc/init/serial.conf

and put the below contents in it:

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
 
respawn
exec /sbin/getty 115200 ttyO2

To set up Wi-Fi, we first have to install some firmware:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
 
sudo mkdir -p /media/rootfs/lib/firmware/ti-connectivity
sudo cp -v ./linux-firmware/ti-connectivity/* /media/rootfs/lib/firmware/ti-connectivity

and then edit the file:

sudo gedit /media/rootfs/etc/network/interfaces

and add the below contents in it:

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "essid"
    wpa-psk  "password"

Now, sync the file system, unmount and remove the SD card:

sync
sudo umount /media/rootfs

Booting Ubuntu and Installing the LXDE Desktop

Insert the SD card into Pandaboard and apply power. Connect HDMI to a monitor and you should see a simple console login. Enter username ubuntu and password temppwd.

To install LXDE desktop, perform the following steps:

sudo apt-get update
sudo apt-get -y install lxde-core slim xserver-xorg x11-xserver-utils

Other desktop configurations, such as ubuntu-desktop, do not work; the system locks up.  If you're having slow or disappearing mouse issues, add the following to the /etc/X11/xorg.conf file:

Section "Monitor"
   Identifier "Builtin Default Monitor"
EndSection
Section "Device"
   Identifier "Builtin Default fbdev Device 0"
   Driver "omap"
   Option "HWcursor" "false"
EndSection
Section "Screen"
   Identifier "Builtin Default fbdev Screen 0"
   Device "Builtin Default fbdev Device 0"
   Monitor "Builtin Default Monitor"
   DefaultDepth 16
EndSection
Section "ServerLayout"
   Identifier "Builtin Default Layout"
   Screen "Builtin Default fbdev Screen 0"
EndSection

After you finish, boot from the new SD card image, log in, and you should see a pretty desktop like the one shown at the top of this blog.

Preparing Your Model for 3D Printing


After you create or import a 3D model, some preparation is usually required to make it 3D-printer friendly.  This blog post describes some of the most important steps.

Translating

After importing a model, it is often necessary to translate it to the origin before working with it.  Take the Darth Vader model below, for example: after importing it into Autodesk 123D Design, one will find that it sits well above the ground plane.


Darth Vader model right after import.

To fix the problem, use the translation/rotation feature and the viewing perspective cube at the top-right to bring Darth Vader down to the ground plane.


Using the translation/rotation feature and viewing perspective cube (top-right) to bring the model down to near the origin.

Darth Vader brought near the origin, unscaled.

Scaling

If you designed the model using the units you actually want to use, then this step may not be required.  If you import a model from an external source, the model often needs to be scaled to fit within the printer's build volume and your desired size.  Autodesk 123D Design can be used to scale the model.  The Darth Vader above is rather large, as is evident from the ground plane: its millimeter grid is so dense that it appears as a solid plane.  To scale Darth Vader down to size, use the scale tool:

Darth Vader scaled.

Notice that scaling introduces new offsets.  To remove the offset, simply use the translation tool to move the scaled model down again.  More often than not, the translation and scaling tools need to be used iteratively to reach the desired result.  After a few iterations, a 7.5 cm wide Darth Vader model is achieved.  Save the model as a 123D Design file, but also export it as an STL file.

Darth Vader scaled to 7.5 cm wide.

Water-Tight

Now that we have a good-looking model at the right size, the model must be made into a solid before printing; otherwise it is just a group of surfaces, or mesh.  Often the design may appear to be solid, but if there are any holes in the exterior surface, the model is really a hollow shell of zero-thickness surfaces.  A 3D printer usually cannot print such a model.  To ensure the model you designed or imported does not have holes, it must be made water-tight with the right tool.  A free tool that will check whether an STL file is water-tight is Netfabb Basic.  Autodesk 123D Design provides such support through the 3D Print button:

3D Print button to ensure model is water-tight.

The button will bring up another Autodesk tool called Meshmixer, which will check and repair the model to ensure that it is water tight:

Run Meshmixer Repair tool to make Darth Vader watertight.

Darth Vader repaired.

After the model is repaired, export it as an STL file.  The Export button is at the bottom left of the Meshmixer window.

Repaired Darth Vader STL file imported into 123D Design.

Comparing the repaired model with the original carefully, one will find that the repaired model is fully enclosed and smoother.  The repaired model is now almost ready for import into 3D printing software, such as Simplify3D.

Slicing

Sometimes a model is complex enough that it needs to be sliced into more 3D-printable pieces.  For example, the X-Wing Fighter, when sliced as in the figure below, becomes much easier to "extrude" or print.  Notice that the slicing plane is located where the cross-sectional area is at its maximum.

Splitting a complex model to make it more 3D-printer friendly.

Adding Support

While slicing a model can improve 3D printing, it requires gluing the pieces together after printing.  Alternatively, one can add support structures to the model to help the 3D printer.  Both Simplify3D and Meshmixer support adding supports, but I find Meshmixer produces better support structures that are easier to break off after printing.  Below is an example of Darth Vader with support structures added.  After the supports are added, the model can be exported as an STL file, which can then be imported by Simplify3D.  In this case, the model should be printed without Simplify3D adding any additional support.

Darth Vader model with support added, using Meshmixer.

Implementing Support Vector Classifier in Python


Support Vector Classifier (SVC) is a form of Support Vector Machine (SVM) capable of categorizing inputs under supervised training.  This blog post discusses how to implement an SVC in Python using the scikit-learn module.  We will walk through a simple example below.

To start off, import the necessary support modules: numpy, SVC from sklearn.svm, and joblib from sklearn.externals.

#!/usr/bin/python
import numpy as np
from sklearn.svm import SVC
from sklearn.externals import joblib

Next we create two arrays, X and y.  X is a list of feature vectors, each feature vector being a list itself.  In the example below, each feature vector has two elements, and there are 8 sample feature vectors.  y is the label vector; each element is a 'label' that you want the SVC to associate with the corresponding input.  In the example below, the feature vector [-2, -2] will be associated with label 'd' (the fifth element of y).

# X is a list of feature vector.
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1], [-2, -2], [8, 4], [5, 3], [4, 8]])
# y is a list of labels. A label can be an integer or a string
y = np.array(['a', 'a', 'c', 'b', 'd', 'c', 'd', 'e'])

Now, we can create an SVC and train it with X and y:

# clf is the SVM Classifier
clf = SVC()
# Train the SVC
clf.fit(X, y)

To test the classifier's ability to generalize, we pick a feature vector that is very close to one of the feature vectors used for training.  We do this because we know what the right answer ought to be.  For instance, since we know [-2, -2] is associated with label 'd', we would expect [-1.8, -2.1] to also respond with label 'd'.

# Print the prediction - note [-1.8, -2.1] is very close to [-2, -2],
# which maps to 'd' positionally (5th position), so 'd' should be the output.
# Note predict() expects a 2D array: one row per sample.
print(clf.predict([[-1.8, -2.1]]))

Finally, after the SVC is trained, we want the learning to persist by saving the SVC to a file. This is done using joblib:

# save file as binary
joblib.dump(clf, "mysvm_save.pkl", compress=9)

To test the save file, load it into another variable, clf2, and test it:

# reload saved file and run the svm
clf2 = joblib.load("mysvm_save.pkl")
print(clf2.predict([[-1.8, -2.1]]))
# 'd' should be the output

Now run the code:

$ python sci2.py 
['d']
['d']

Sure enough, the results are as expected.

The Emperor’s New Clothes


The Emperor's New Clothes is a short tale about a vain emperor obsessed with fancy clothes and how he looks.  One day, two weavers promised to make him a royal robe that could not be seen by those unfit for their positions.  When the robe was finished, the two weavers pretended to put it on the emperor.  No one around the emperor, including the emperor himself, was willing to admit that they could not see the robe, and everyone went along with the pretense.  Then, during a royal parade, a child in the crowd, too young to understand the pretense, cried out, "The emperor is naked!"  With the pretense shattered, the crowd began to cry out the same, setting up a rather embarrassing ending.

This story may be a children’s fable, but its modern parallels are being played out in the corporate world.  Below are some of the warnings from the story to leaders.

The Emperor

The emperor was obsessed with his appearance, and was unwilling to acknowledge the truth (the invisible robe), fearing that others might see him as unfit to lead.  Corporate leaders may encounter similar situations where the truth reflects poorly on them; instead of acknowledging the truth and learning from it, some leaders choose to spin it to avoid backlash.  This story is a warning to leaders who spin the truth: people may go along with your pretense, but nobody is fooled.

The Weavers

The cunning weavers, probably for personal gain, convinced the emperor to do something ill-advised by promising him a robe that would make him special.  In the corporate world, a leader's desire to be special may lead him to take unwise actions.  Beware of those around you who may manipulate you and your situation to get what they want.

The Officials

It's telling that those close to the emperor didn't say anything to him when he was naked.  The reason for the silence was fear, which is common in the corporate world.  A high-ranking leader can be so unapproachable that her followers feel they cannot communicate the truth without repercussion, so they choose silence.  Leaders should foster a culture of openness and integrity, not surround themselves with yes-men, and pay attention to those who are silent, as they may have something to say.

The Innocent Child

The story climaxes when an innocent child blurts out, "The emperor is naked!"  Amid all the pretense, this is the first time someone speaks the truth.  Today, the child would be the equivalent of a whistle-blower, someone who has had enough of the false pretense and decides to tell it like it is.  Whistle-blowing is the last thing any high-ranking leader wants to see, as it leads to public scrutiny and potential embarrassment.  The way to prevent public whistle-blowing is to provide channels for followers to identify wrongs without repercussion.  The goal is to catch and solve problems before they get out of control.

The Crowd

After the child cried out, "The emperor is naked!", the crowd realized the pretense was broken, and began to affirm the truth by shouting the same.  One can only imagine the mob scene and the humiliation the emperor must have experienced.  The lesson to modern leaders is this: eventually the truth will come out; the longer you wait, the harder your fall.

 

(The above article is solely the expressed opinion of the author and does not necessarily reflect the position of his current and past employers)

 

Math for Team Managers


When I am asked about my management philosophy, my reply is usually, "It depends."  I believe the right management philosophy depends on the team composition and circumstances.  Below are some easy-to-remember formulas:

team = manager x (staff #1 + staff #2 + … + staff #n)

This is perhaps the classic management model, in which the manager serves as an "enabler" of the entire team's effectiveness.  Her role is to ensure that the team is operating efficiently by defining good business practices and providing the right tools and training.  The manager's role magnifies the entire team's effectiveness, like a multiplier.  This type of manager is usually the team's face to the rest of the company.

team = (staff #1 + staff #2 + … + staff #n) x manager

This is like the previous model, but the manager keeps a low profile and encourages the staff to interface directly with the other parts of the company.  Managers who lead from behind like this are often new managers who have to lean on their staff for know-how; as they grow more confident, they begin to lead from the front.  Managers who are mentors often lead from behind as well; they take pride in creating good pupils and are happy to put the pupils in the spotlight.  I find managers close to retirement, and those rare, selfless managers, operate this way.

team = STAFF #1 + (staff #2 + … + staff #n) x manager

Many teams have superstars, and these superstars can have big egos.  Confident managers who can keep their own egos under control are best suited to managing superstars.  These managers work to give the superstars the visibility they need while not letting the egos get out of control.

team = (STAFF #1 x staff #3 + … + STAFF #2 x staff #n) x manager

One way the manager can channel superstar egos positively is to challenge the superstars to become mentors by pairing them with junior staff.  This way the superstars help grow the team, learn new soft skills, and have their egos satisfied.  This also helps groom future managers.

team = (manager + staff #1 + staff #2 + … + staff #n)

In many startups, the manager acts like one of the staff, and usually has a title of Team Leader to deemphasize the manager role.  This model suits budget-strapped startups where management activity is minimized and product development activity is maximized; everyone has to be "billable".  For this to work, the manager must not have a big ego, be a control freak, or get hung up on titles.  Monetary reward is usually what unites the team with a single focus.

team = (staff #1 + staff #2 + … + staff #n)

In rare cases a team is so small and tight-knit that the role of a manager is unnecessary.  For this nearly Utopian model to work, members must share the same vision, respect one another, and have the maturity to manage egos and handle disagreement without going through an authority.  I have personally only seen this model work for teams formed for a specific purpose; when the purpose is reached, the team disbands.  I would venture to say that this model is rare in the corporate world.

Final Thoughts

A good manager must be perceptive of the team composition and dynamics, and apply the appropriate style to maximize the team's effectiveness.

(The above article is solely the expressed opinion of the author and does not necessarily reflect the position of his current and past employers)

Facing the “Donald Sterling” in Us


Donald Sterling's recent racist remarks stirred up the NBA and evoked the memory of a painful past in U.S. history.  The news media, sports celebrities and even the President have condemned the remarks.  While the source of these remarks has yet to be authenticated, the remarks themselves were clearly discriminatory.

They were also meant to be private.

The fact that the remarks were private is most troubling, because what one thinks and does in private reflects one’s true character.

Imagine having your private thoughts projected on a jumbotron.  What will people see?

I suspect that many of us have private thoughts that we are not proud of.  While we externally project an image of political correctness, in private, or subconsciously, we may behave the opposite.  There are many reasons why one group of people could develop biases against another; some may even feel justified because of real-life experiences.  But social biases, broadly generalized and left unchallenged, can lead to racism.


I do not understand what I do. For what I want to do I do not do, but what I hate I do.  (Rom. 7:15)

You see, racism is not just an issue of White vs. Black or majority vs. minority, as even minority groups discriminate against one another.  Let me be clear: I am not defending Donald Sterling; far from it.  I am challenging everyone, myself included, who casts stones at Sterling to first examine ourselves.

For in the way you judge, you will be judged; and by your standard of measure, it will be measured to you.  (Matt. 7:2)

None of us are born race-less, and racism can creep into our subconscious ever so subtly.  The Donald Sterling firestorm should serve as an occasion for us to search within ourselves for any racial biases, however big or small, and do some house cleaning.

Search me, O God, and know my heart; Try me and know my anxious thoughts; And see if there be any hurtful way in me, And lead me in the everlasting way.  (Ps. 139:23-24)

(The above article is solely the expressed opinion of the author and does not necessarily reflect the position of his current and past churches)