Blog
Commercial remote pilots license
On 13, Sep 2016 | In News, UAV/drone | By Eric Compas
Under the new FAA Part 107 rules, Dr. Compas is taking the test to obtain a commercial “remote pilot airman certificate.” This will allow us to conduct University research flights under the Part 107 rules.
Rebuilding website
On 13, Sep 2016 | In News | By Eric Compas
Sorry for the mess! Please be patient as we rebuild our website. It was about time to do some cleanup around here.
sUAS vegetation mapping: First successful data collection
On 27, Jun 2016 | In UAV/drone | By Jeff Smyczek
After conducting our initial mission, we realized a problem with the images taken from the Tetracam. We were missing images from one of the lenses, which prevented us from calculating the normalized difference vegetation index (NDVI). After trying to resolve the issue with Tetracam support, we had to send the camera back to Tetracam headquarters for repairs. Sending the camera halfway across the country cost us valuable research time, as we intended to monitor vegetation health throughout the growing season.
Once the issues were resolved, I performed pre-flight inspections and took off!

Jeff flying the 3DR Solo
The mission went smoothly, and I was able to capture our first useful data from the Tetracam. I also collected biomass samples so we could correlate the aerial data with live biomass weight once it was dried. I then uploaded the images and began processing in Pixel Wrench, Pix4D, and ArcMap to calculate NDVI from the orthomosaic.
After images are assigned a GPS position, Pix4D is able to locate the images and create a ray cloud shown here:

RayCloud produced in Pix4D
From the ray cloud, an orthomosaic is created that we can upload to ArcMap and calculate NDVI. NDVI is a measure of live green vegetation in a target and allows us to analyze vegetation health remotely. We are still working on the image calibration procedure, but our preliminary NDVI calculation from the first flight is shown below.

NDVI values range from -1 (red) to 1 (blue). Blue represents healthy vegetation, while red areas correspond to unhealthy vegetation. Some anomalies exist within the imagery and need further investigation.
While these accomplishments were significant, we were still seeing artifacts (straight lines and vignetting) in our imagery and resulting orthomosaic that will require further work.
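The NDVI calculation itself is a simple band ratio of the narrow-band red and NIR images. Here's a minimal sketch in Python with NumPy, a stand-in for the ArcMap raster calculation we actually used; the toy arrays are made-up reflectance values, not our data:

```python
import numpy as np

def ndvi(nir, red):
    """Compute NDVI = (NIR - Red) / (NIR + Red), leaving pixels
    with zero total reflectance at 0 to avoid division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy reflectance arrays: healthy vegetation reflects strongly in NIR
nir_band = np.array([[0.6, 0.5], [0.1, 0.3]])
red_band = np.array([[0.1, 0.1], [0.1, 0.3]])
print(ndvi(nir_band, red_band))
```

In a real workflow the two inputs would be the calibrated red and NIR bands of the orthomosaic, evaluated per pixel.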
sUAS vegetation mapping: Testing and mounting the Tetracam RGB+3
On 12, Jun 2016 | In UAV/drone | By Jeff Smyczek
Thanks to the University of Wisconsin-Whitewater Undergraduate Research Program and the Wisconsin Space Grant Consortium, I was able to conduct my first research project as part of the Summer Undergraduate Research Fellowship (SURF) program. I began a project mapping vegetation health from a small unmanned aerial system (sUAS), or “drone,” in a restored prairie called Fair Meadows. The purpose of this project is to answer the question: can multispectral drone imagery be used to accurately monitor vegetation health in a variable natural setting? In attempting to answer that question, Dr. Eric Compas and I embarked on a journey we knew would have its share of problems. Some research utilizing these new technologies has been conducted thus far, and we hoped to add to the knowledge and use of sUAS.
The first step of the project was to learn and prepare the new Tetracam RGB+3 for flight. The Tetracam RGB+3 is a multispectral camera that has four lenses: broad-band RGB and narrow-band red, red edge, and near infrared (NIR). The narrow bands allow us to generate several vegetation indices, such as the normalized difference vegetation index (NDVI). After scouring the manual, we learned best practices for the Tetracam RGB+3 and configured settings that we thought were appropriate for our study. With our newfound knowledge, we started on-the-ground testing, taking pictures of reflectance standards for proper image calibration. With calibration still in the works, I began fabricating a custom mounting system to attach the camera to the drone. We had some significant constraints: we needed to stay within the 3DR Solo’s payload of about 420 grams, and the RGB+3 alone weighs 400 grams. We also had to make sure not to shift the drone’s center of gravity with this significant weight.
As with any engineering project, prototypes are necessary, and this mounting system was no exception. My first mount held the camera extremely well, so we took it out for a test flight. The mount held up, and we conducted our first mission. Upon inspecting the images in the lab, we noticed a slight blurriness and went back to the drawing board. We decided to fabricate a new mount that would isolate vibrations from the drone. After tossing around ideas, we settled on a bungee cord suspension system (with help from Tetracam). Success! Images from the next flight were sharp, and the mount held the camera securely as it traveled 50 meters above the prairie.

Tetracam and vibration-dampening mount

3DR Solo
Drone flights over WI state property
On 15, Sep 2015 | In UAV/drone | By Eric Compas
We’ve been discussing potential UAV/drone flights over Wisconsin Department of Natural Resources property with DNR officials. It turns out there are laws that restrict flying over some state property, including using drones to hunt or fish or using drones in a way that could disturb wildlife. In addition, some areas, such as state parks and state natural areas, are completely off limits. Overall, these restrictions seem fairly reasonable.
Additional analysis of Milwaukee water quality data
On 04, Aug 2015 | In Water quality | By Eric Compas
We’ve conducted some initial analysis of our data from last Friday (more here). Our goal was to be able to better visualize our sample data along the stream corridor. As a geographer, I turn to a map as my first impulse, but in this case, it’s certainly not the only way of viewing our data. In particular, we were interested in comparing the values from each of our units and better visualizing trends on our metrics along the stream corridor.
So, from our magic GIS hat, we pulled out some dynamic segmentation tools to “linearize” the data we’ve collected. Put simply, we moved each data point to a stream center line as defined by USGS’s National Hydrography Dataset (NHD) yielding a distance along the stream. We could then plot our metrics, e.g. dissolved oxygen, versus distance along the stream in a conventional scatterplot allowing us to compare our two sample units and samples through time.
Here’s a visualization of the “linearization” of our data:

Linearization of stream sample data
In ArcGIS, each data point (in red) was moved to the NHD stream center line (points in blue) if it was within 100 meters of the center line. In addition, each new blue point was given a distance along the stream stretch.
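The snapping step can be sketched in plain Python. This is a simplified stand-in for the ArcGIS dynamic segmentation tools we actually used: it projects each sample point onto a polyline centerline, drops points farther than 100 meters away, and returns distance along the stream. Coordinates are assumed to be in a projected, meter-based system, and the sample data below are made up:

```python
import math

def project_to_polyline(pt, line):
    """Project a point onto a polyline; return (distance_along, offset)
    for the closest position on the line."""
    best_along, best_off = 0.0, float("inf")
    cum = 0.0
    for (ax, ay), (bx, by) in zip(line, line[1:]):
        seg_len = math.hypot(bx - ax, by - ay)
        if seg_len == 0:
            continue
        # parameter of the perpendicular foot, clamped to the segment
        t = ((pt[0] - ax) * (bx - ax) + (pt[1] - ay) * (by - ay)) / seg_len ** 2
        t = max(0.0, min(1.0, t))
        fx, fy = ax + t * (bx - ax), ay + t * (by - ay)
        off = math.hypot(pt[0] - fx, pt[1] - fy)
        if off < best_off:
            best_along, best_off = cum + t * seg_len, off
        cum += seg_len
    return best_along, best_off

def linearize(samples, centerline, max_dist=100.0):
    """Snap (x, y, value) samples to the centerline, keeping only those
    within max_dist, and return sorted (distance_along, value) pairs."""
    out = []
    for x, y, value in samples:
        along, off = project_to_polyline((x, y), centerline)
        if off <= max_dist:
            out.append((along, value))
    return sorted(out)

# Made-up stream centerline and dissolved oxygen samples;
# the third sample is more than 100 m from the line and is dropped
stream = [(0, 0), (1000, 0)]
samples = [(600, -30, 8.1), (250, 40, 7.9), (800, 500, 9.0)]
print(linearize(samples, stream))
```

The resulting (distance, value) pairs are what feed the scatterplots of each metric versus distance along the stream.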
As of yet, these data are unfiltered. We haven’t removed known extraneous and/or invalid readings.
First, data from the Kinnickinnic
(which can be compared to the map here)
Note that the blip in temperature is due to Unit #2’s temperature element being removed from the water for a time. Temperature is almost the same for both units.
Dissolved oxygen is also very similar (thankfully, after quite a struggle) between the two units. The mess to the left in the graph is explained below.
Electrical conductivity also shows close correspondence between the two units except at fairly high values. We use a two-point calibration at 84 µS and 1,413 µS, so this divergence outside the calibration range is not all that surprising.
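For reference, a two-point calibration like this amounts to fitting a line through the readings taken in the two standards; anything past the upper standard is extrapolation. A minimal sketch, where the raw readings are hypothetical rather than from our probes:

```python
def two_point_calibration(raw_low, raw_high, std_low=84.0, std_high=1413.0):
    """Build a function mapping raw EC readings to calibrated values
    using the line through the two calibration standards (in µS)."""
    slope = (std_high - std_low) / (raw_high - raw_low)
    return lambda raw: std_low + slope * (raw - raw_low)

# Hypothetical raw readings taken in the 84 µS and 1,413 µS standards
cal = two_point_calibration(raw_low=90.0, raw_high=1380.0)
print(cal(90.0), cal(1380.0))   # the calibration points map exactly
print(cal(2000.0))              # readings beyond 1,413 µS are extrapolated
```

Any unit-to-unit difference in slope gets magnified the farther a reading sits outside the calibrated range, which is consistent with the divergence we see at high EC values.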
Our pH values, while still exhibiting similar trends, show the greatest discrepancy between the two units that, disconcertingly, varies throughout the sample. We’ll be revisiting our calibration procedure for the pH probe to make sure we’re consistent with each unit.
For the Menomonee River
For the Menomonee, our units again performed similarly, with all but pH matching fairly closely.
Why the jagged or seemingly noisy segments? This is due to including both the paddle out and back along each segment on the same graph. So, for dissolved oxygen on the Menomonee for example, Unit #3 (in orange) returned significantly different values on the way out as compared to the way back in the 6,200 to 6,700 meter range. Mike, paddling Unit #3, took a different path on the way back in this segment and there appears to have been a significant cross-sectional change in the DO across the stream profile. Since we’re linearizing and combining both in- and out-paddles, our line graph bounces back and forth across these values.
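One way to untangle the two passes is to split each unit's time-ordered samples at the turnaround point (the maximum distance along the stream) and plot the legs separately. A rough sketch with made-up sample values:

```python
def split_out_and_back(track):
    """Split a time-ordered list of (distance_along_stream, value)
    samples at the turnaround (maximum distance), so the outbound
    and return legs can be plotted as separate lines."""
    turn = max(range(len(track)), key=lambda i: track[i][0])
    return track[:turn + 1], track[turn:]

# Made-up DO samples: paddle out to 600 m, then back
track = [(0, 8.0), (300, 8.2), (600, 7.5), (500, 9.1), (200, 8.9), (0, 8.1)]
out_leg, back_leg = split_out_and_back(track)
print(out_leg)
print(back_leg)
```

Plotted as two lines instead of one, the cross-channel differences would show up as an offset between the legs rather than a jagged zigzag.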
Obviously, we still have a lot of explaining to do for each of these trends. We’ll leave that for another post for now.
Testing our water quality arrays in Milwaukee
On 02, Aug 2015 | In Water quality | By Eric Compas
Mike, Karl, and I had our first big day with the water quality arrays. With everything working fairly smoothly, we headed to Milwaukee for a long day of kayaking. Our goal was to paddle up and down the Kinnickinnic, Menomonee, and Milwaukee Rivers, starting from the Milwaukee County boat ramp at River Front (on the east side of the Kinnickinnic and Milwaukee confluence). We were hoping to see differences between the freshwater estuary areas of the lower river reaches and the more stream-dominated upper sections.

Sampling in Milwaukee
Starting around 10am, we paddled all day (mostly into a stiff wind) to cover almost all the ground we were hoping to. We stopped around 6pm after one of Mike’s arms fell off (well, nearly) and around 14 miles of paddling. Both Mike and Karl had units on their boats running all the time, so any stretch of water was sampled four times — a good opportunity to see how our units compared with themselves and each other.
Here’s a look at the raw data from our units starting with dissolved oxygen (DO). Click on the Legend and Layers icons in the upper right to view the legend and to switch between temperature, pH, dissolved oxygen, and electrical conductivity (I’d suggest using the link to full screen map to explore the data in detail). Also, once you’re zoomed in, you can click on any point for detailed sample information.
(Note that the map refreshes itself every minute — this is for when we’re collecting data in real time.)
The map shows considerable variability for most of the metrics we measured. The most significant change, seen best in the temperature layer, is from the lake-influenced section of each “river” (colder water) to the stream-influenced section (warmer water). We were somewhat surprised, though, by how smooth this transition was, particularly given the greater surface flow from the Milwaukee and Menomonee Rivers.
The second noticeable change was the increase in electrical conductivity (EC) as we moved upstream. Both the Kinnickinnic and Menomonee Rivers showed increases in EC as we moved into surface-dominated waters (particularly where the river current was noticeable), indicating high amounts of dissolved solids in this largely urban watershed (more info at Milwaukee Riverkeepers and SEWRPC). Likely sources include salts from roadways, runoff from Mitchell Field, nutrients from urban yards, and industrial waste water. The Kinnickinnic, in particular, had a region of very poor water quality with high EC values and very low DO (lowest reading of 0.97 mg/L!).
We were also very pleased with the consistency of the measurements from each unit. They matched fairly well at first glance, with a couple of discrepancies along a few stream reaches. We did encounter problems with our server as we paddled the Kinnickinnic, so there are a few gaps in that sample stretch (the data are recorded in backup files on the phones, which we haven’t retrieved yet).
Stay tuned — we have additional analysis planned to explore values from both units and other ways of visualizing trends.
Testing and calibrating water quality array – Part 3
On 18, Jul 2015 | In Water quality | By Eric Compas
With the rebuilt arrays, it was time for more in-depth testing… Again, I’m growing more dubious of Atlas Scientific’s claims about how easy it is to obtain high-quality measurements from their probes.
For this test, I was interested in seeing the influence of each probe on the other in the same body of water. In other words, does one probe change the value of another when they’re near each other in the water? With our kayak mount, we can place probes on either side of the boat and wanted to know the optimal configuration.
Setup:
- Arduino Nano connected to I2C board with pH and DO, EC through PWR-ISO, and Bluetooth (HC-06). Ground wire bridge between PGND on pH and DO (as per Atlas-Scientific instructions)
- All probes EZO version in I2C mode
- Bluetooth in connected mode with phone app
- All probes calibrated via Atlas Scientific methods: three-point for pH (4, 7, 10), one-point for DO (100%), two-point for EC (dry and 84 µS)
- Three buckets with equal amounts of tap water (~3.5 gallons), drawn from the tap at the same time and left to sit for two hours
Procedure:
- Calibrate probes immediately before test
- Place each probe in separate buckets to start.
- Record values for 5 minutes with each of the following combinations (isolated, in pairs, all three):
Table of probe combinations tested (number indicates which sample bucket each probe was placed in)

| Readings | pH | DO | EC | Temp |
|---|---|---|---|---|
| 0-41 | 1 | 2 | 3 | 2 |
| 42-85 | 2 | 2 | 3 | 2 |
| 86-129 | 1 | 2 | 3 | 2 |
| 130-172 | 3 | 2 | 3 | 2 |
| 173-215 | 1 | 2 | 3 | 2 |
| 216-258 | 1 | 3 | 3 | 2 |
| 259-300 | 1 | 2 | 3 | 2 |
| 301-343 | 2 | 2 | 2 | 2 |
| 344-388 | 1 | 2 | 3 | 2 |

Testing probe interaction/interference with three vessels
It’s clear that we are getting interference from some of the probes. In particular, the pH and DO probes influence one another. When they’re both first added to the same vessel, their values are thrown off by a considerable amount for several readings and then both stabilize. The DO readings are close to initial readings (off by ~0.5) after 2-3 minutes; however, the pH values stay considerably lower (by ~1.25).
And, somewhat as expected, the DO probe also appears to be influenced by moving it from one vessel to the next (so, it does matter which probe is moved to another vessel). As it’s exposed to the air, its reading rises approximately half a point and then gradually declines to its previous value. Note that this effect alters values for almost the whole 5-minute sample period.
I did some additional ad-hoc testing of each probe moving through the water. Only the DO probe shows any noticeable change in values: about a 1 mg/L increase if any air bubbles are present when moving.
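A practical consequence is that transient readings, such as those right after the DO probe surfaces or is moved between vessels, should probably be filtered out before analysis. One simple approach, sketched here with hypothetical numbers and thresholds, is to drop leading readings until consecutive values have settled:

```python
def drop_transient(readings, tol=0.1, window=3):
    """Drop leading readings until `window` consecutive differences are
    all within `tol`, i.e. the probe has settled; returns the stable
    tail, or an empty list if the probe never settles."""
    for i in range(len(readings) - window):
        diffs = [abs(readings[i + j + 1] - readings[i + j]) for j in range(window)]
        if all(d <= tol for d in diffs):
            return readings[i:]
    return []

# Hypothetical DO trace: elevated after surfacing, then settling
readings = [8.9, 8.7, 8.5, 8.42, 8.40, 8.41, 8.39, 8.40]
print(drop_transient(readings))
```

The tolerance and window would need tuning per probe; the DO transient we observed lasted minutes, so a time-based cutoff could work just as well.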
Lessons:
- pH and DO can’t operate near each other — they’ll go on opposite sides of the boat. The connected grounds suggested by AS didn’t eliminate the interaction.
- The isolated EC didn’t appear to influence any other probe. I assume this is due to the power isolator, but this hasn’t been explicitly tested.
- All three probes can’t operate near each other and need to be separated
- DO needs to stay underwater — any surfacing can influence quite a few readings
- Temperature doesn’t appear to influence any of the other probes’ values
We’ll go with DO and temperature on one side of the boat and pH and EC on the other.
Water quality array – temperature compensation
On 14, Jul 2015 | In Water quality | By Eric Compas
In trying to get our two units to agree, we noticed that the DS18S20 temperature probes seem to be off by 0.5 to 0.75 °C from one another. Given that all the probes use temperature compensation (and the dissolved oxygen probe uses it for calibration), this slight difference could be exacerbating any differences we were noticing.
So, we placed both temperature sensors in a cooler of ice water to generate a manual, one-point temperature compensation for each of our units. For each unit, we let the probe sit in the ice water for at least 5 minutes and then took five consecutive readings one minute apart.

Calibrating temperature sensor in ice bath
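The one-point compensation itself is just an offset: readings in an ice bath should average 0 °C, so the correction is the difference. A minimal sketch, where the bath readings below are hypothetical:

```python
def ice_bath_offset(readings, reference=0.0):
    """One-point calibration offset: readings taken in a 0 °C ice bath
    should average the reference temperature."""
    return reference - sum(readings) / len(readings)

def correct(raw, offset):
    """Apply the calibration offset to a raw reading."""
    return raw + offset

# Hypothetical ice-bath readings from one unit, running ~0.6 °C high
bath = [0.56, 0.62, 0.59, 0.61, 0.58]
offset = ice_bath_offset(bath)
print(round(correct(20.0, offset), 2))
```

With a per-unit offset computed this way, both units' temperature values (and everything compensated by them) should line up more closely.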
Testing and calibrating water quality array – Part 2
On 13, Jul 2015 | In Water quality | By Eric Compas
After the data quality issues with the first array design, I contacted Atlas Scientific (AS) about our problems. Jordan at AS suggested that we make two changes: 1) connect the probe grounds on both the dissolved oxygen and pH probes, and 2) isolate the electrical conductivity probe from the rest of the probes using their power isolator. Since neither of these changes is documented on their website or in their support materials, I was frankly growing a bit concerned about the claims made on AS’s website — was this really our problem?
Potential issues/interactions:
- One of our probe arrays was a year old; the DO probe was showing some non-linearity in its readings
- The probes’ electronics were interacting with one another on the microcontroller board
- The probes were interacting with one another in the water (very likely with the EC probe)
- The power draw from the Bluetooth device was interacting with one or more of the probe’s electronics
We addressed issue one by using a third set of probes that were still in the box (purchased within a month of array #2) and retired array #1 for now. For issues two and three, we followed AS’s suggestions above. In addition, we switched from the 3.3V Teensy 3.1 microcontroller to the 5V Arduino Nano; Jordan at AS had expressed some concern that the probe kits may be underpowered at 3.3V, and the power isolator only works at 5V. Effectively, we rebuilt the array from scratch:

Rebuilt water quality array – round 2
As for issue #4, preliminary testing shows that whether the Bluetooth module is in “discovery” mode (light flashing, higher power draw) or connected impacts the EC value but not the other measurements. In “discovery” mode, the EC value is approximately 0.5 µS lower than when the device is either connected or unplugged (why? common ground? power drop?). This seems to indicate that connected mode is similar to having no Bluetooth device at all and is likely to have minimal impact on the EC values. It does indicate, though, that calibration should only be conducted with the Bluetooth module unplugged or connected to the phone app.