Photogrammetry Update - 12/14/15

Flying an "MAV" or, you know, an airplane.

The next couple weeks I'll be in Anchorage finishing up flight training. It has been a ton of fun and the views are pretty spectacular:


While flying and studying take a lot of time, they're needed for K2D to operate legally. However, we are staying busy meeting with potential customers and biz dev folks here in town. We had great meetings with the Anchorage Economic Development Corporation, who are helping us with market research, and we continue to be active with the Boardroom, where we'll present at 1 Million Cups this Wednesday. We're also finishing up our customer pitch decks and 1-sheets so we're ready when our FAA approval finally comes through (we expect it either at the end of this month or in early January).

Of course, it started snowing this week, cancelling most training flights (including, for the time being, unmanned ones), so I thought I'd dive into one of the main services we'll provide: photogrammetry.

Intro to Photogrammetry

It is essentially the art/science of making 3D models with our drone. It takes the camera angle along with the x, y, z position and yaw rotation of the drone and uses those to build a 3D model. Luckily for us, the drone encodes all of this in the EXIF info of the images. There is a lot more to it than I'll dive into here, but let's learn some of the basics, the promise, and the peril.
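As a rough sketch of what that EXIF position data looks like in practice: GPS coordinates are stored as degree/minute/second rationals that have to be converted to decimal degrees before any modeling software can use them. (The tag encoding below follows the standard EXIF GPS layout; the sample values are made up, roughly Anchorage.)

```python
# Convert EXIF-style GPS tags (degree/minute/second rationals)
# into the decimal degrees that photogrammetry software works with.
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """dms: (degrees, minutes, seconds) as 'num/den' rationals; ref: 'N'/'S'/'E'/'W'."""
    deg, mins, secs = (float(Fraction(x)) for x in dms)
    value = deg + mins / 60 + secs / 3600
    # South latitudes and west longitudes are negative in decimal form.
    return -value if ref in ("S", "W") else value

# Made-up sample tag values:
lat = dms_to_decimal(("61/1", "13/1", "5/1"), "N")
lon = dms_to_decimal(("149/1", "54/1", "0/1"), "W")
print(round(lat, 4), round(lon, 4))
```

The altitude and yaw tags get pulled out the same way; once every image carries its own position and orientation, the software can triangulate matched features between overlapping shots.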

The Promise: Great Uses

These models can be used for everything from construction to facilities management to interactive online widgets to traditional CAD modeling. We can generate a point cloud to go into SolidWorks/CATIA/etc., export animation fly-bys, or export them as 3D PDFs to be used in the field. The way we generate these is also less labor intensive and far cheaper than the methods of old. So that's the promise of this tech - and the science.

The "art" aspect comes in when we are finessing the model and minimizing the insane errors that only a computer could love and generate. As you'll see in this update, we are still learning. But we're learning fast and finding out some neat things along the way, which makes every model better.

Before jumping into the fray, let's start on a high note. Here's what we can do when it all comes together (it takes a sec to load and fully render):

Within the Pix4D software we can take measurements of lines, polygons, and volumes. Pretty awesome since this is all generated from photos. If you have a remote facility - or are very lazy - we can pull measurements and data out of the model to a reasonable degree of accuracy.
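Under the hood, a measurement like this reduces to simple geometry on the reconstructed points. A minimal sketch (not Pix4D's actual code) of a polygon-area measurement over projected x, y coordinates, using the classic shoelace formula:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given (x, y) vertices in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

# A 10 m x 4 m rectangular pad measured from four reconstructed corner points:
print(polygon_area([(0, 0), (10, 0), (10, 4), (0, 4)]))  # 40.0 square meters
```

Line lengths are just distances between picked points, and volume measurements extend the same idea by summing column heights above a base plane.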

It can also be presented and exported as:

  • Point Cloud: build a traditional 3D model in any CAD software and add structures or features to job sites, campuses, or facilities.
  • 3D PDF: quick reference material for building maps, directories, and the like.
  • Video Animation: a simple pre-determined flight path (no user control needed) to highlight features or show a preferred route. An example is embedded below.

Modeling a Sisyphean Solar Tower

The offending tower, on a beautiful fall day, which looking back, may have filled us with false confidence.

Things didn't always go as planned, however, as the learning curve was steep.

We have flown a friend's solar panel tower many times, as towers are a great target for inspections and a huge challenge for photogrammetry.

Think about it: interlaced gray bars are hard to give depth to and to uniquely identify. It's been a bigger-than-expected challenge and has helpfully kept our confidence in check while showing us advanced ways to fail and, by extension, to learn.

Furthermore, our original flights were done in the fall (an example image can be seen at right) whereas our later flights were done after the first snows. This means that the solar panels had been shifted to their winter declination, the trees had lost all their leaves, and everything on the ground was covered in snow. In short - the scene was completely different. We wouldn't be able to reuse any of the original data. This proved to be a big lesson for us.


Round 1: What is this an image of? Not a tower, that's for sure. Red dots indicate failed pictures.

Round 1: Way Too Few Images

See the image to the left. In short, we didn't take nearly enough pictures. For modeling buildings, a series of "oblique" images (those shot from the side) paired with a grid of "nadir" images (those shot from above) are sufficient.

However, here you can see that we didn't even really get a model to render from this data. There are quite a few reasons for this, such as the grid being too coarse for a small object like a tower, but predominantly it's due to the red dots you can see in the image: those mark images that weren't used. So we lost around half our data right from the outset.

This is why more images are better - if some don't turn out, you want to have plenty of extras (and they're free to take). Plus, throwing out surplus data is a lot easier than generating new data later, when you've already left the site, the weather has changed, and it's dark.

 

Round 2: Vertical Climb

Round 2: We can actually see a tower down there! Or at least a 2D representation of it.

On the left side of the Round 2 image (located at right) you can see two different types of pictures were taken. The first is a set of images that "climb" up the tower. The second is a fixed-altitude, fixed-angle set that starts far out and takes pictures as we fly closer to the tower.

On the right side of the image, you can see that we took a similar set of "fly in" pictures for another side of the tower. However, you'll see that only a 2D representation of the tower came out.

The conclusion we've reached is essentially this: the tower is such a flat, interlaced series of gray colors that it requires the extra up-close images to generate extremely clear points. We can use the farther-out "fly in" images to generate the depth, but to get the texture right we need the up-close images. Finally, on a tower that is roughly 12 ft by 12 ft at its base, the overhead "grid" of images does very little besides help fill in the surrounding barn and trees.
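The "up close for texture" conclusion can be made concrete with ground sample distance (GSD): the real-world size of one pixel. The standard formula is GSD = (sensor width × distance) / (focal length × image width). A sketch with made-up but plausible small-drone camera numbers (not our actual camera specs):

```python
def gsd_cm(sensor_width_mm, focal_length_mm, image_width_px, distance_m):
    """Ground sample distance in cm/pixel at a given camera-to-subject distance."""
    return (sensor_width_mm * distance_m * 100) / (focal_length_mm * image_width_px)

# Hypothetical small sensor: 6.17 mm wide, 3.6 mm focal length, 4000 px image width.
for d in (50, 10, 3):  # far "fly-in" pass vs. close-up passes at the tower
    print(f"{d} m away: {gsd_cm(6.17, 3.6, 4000, d):.2f} cm/px")
```

At 50 m each pixel covers a couple of centimeters, so thin gray bars smear together; at 3 m a pixel covers about a millimeter, which is why the close-up climb images are what finally gave the tower its texture.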

Wrap-Up

In theory, we've found out which images are most critical for each type of structure we analyze. In practice, however, we will likely still collect all the image sets: grids, fly-ins, and vertical climbs. This is because the images are free. But it's good to know exactly which images help generate which features.

We are still assembling our portfolio of office buildings, bridges, towers, and other infrastructure. If you're in AK and want your home or business as a 3D model - or just know of some cool infrastructure - let us know. Of course it's all free because you're our dear friends [and we can't legally charge anyone for at least another month :) ].

Spotlight on Cool Stuff

I hid the Spotlight a little deeper into the update this time but hopefully it won't disappoint:

Our dystopian future is nigh, clumsy, and pretty cool:

Tokyo police are using these nets to try to catch errant drones. Maybe K2D should one-up them and build a pneumatic cannon to launch the nets with a dragline - that would be a lot sleeker. Still, in the meantime, watching DJIs get scooped up is pretty entertaining!

Yeah, well, this borders on illegal and dangerous, but let's just set that aside and agree that it's kinda fun too.

More info: http://arstechnica.com/tech-policy/2015/12/tokyos-drone-squad-will-deploy-10ft-drones-armed-with-nets-to-police-the-sky/

Well that's it for this time. Happy Holidays, whichever you choose to celebrate, and Happy New Year too from the guys at K2D. As always, we love to hear from you so shoot us an email. We've been getting a ton of great feedback on these and lots of good ideas too.

Ben
bmkellie@outlook.com