Wednesday, May 13, 2015

UAS Demo

Introduction:


The purpose of this lab was to demo the use of a UAS. This was done at the UW-Eau Claire Priory, as were the previous few exercises. The demo included running through the preparation, setup, and execution of a UAS survey. The first UAV (unmanned aerial vehicle) used was an IRIS equipped with a GoPro.


Methods: 


After arriving at the Priory, Professor Hupy got out the IRIS UAV, the controllers, and the base station computer and tablet. The first step in preparing for a mission is to go through a series of checks. These are held in a spreadsheet, and include things like checking the weather, making sure the UAV's hardware is secure, that the rotors are tight, and that all of the necessary connections are made. Part of this checklist is verifying battery life, and this was an issue: the IRIS's battery short-circuited during the process. A student was sent home to get another one, and to get some AA batteries for the controller. In the meantime, Professor Hupy explained a little bit about the base station and the mission planning software. We were each able to try drawing routes on a tablet, and we compared the strengths and weaknesses of using the laptop vs. the tablet for mission planning. Basically, the laptop allows for more in-depth setup, but the tablet is quite convenient for drawing routes and the like after setup.

Once Michael returned with the batteries, checking continued. This included verifying connections between the UAV, base station, and transmitter, and verifying satellite connections. Once these were done, we were ready to fly. Professor Hupy handled takeoff manually, and then switched to autopilot at 40 m to run the mission previously drawn on the computer. Another student was at the base station. Afterwards, they used the autoland functionality to bring the UAV back to the starting point.

Professor Hupy was hesitant to use the next UAV because of the weather conditions. The wind had picked up, with gusts up to 20 mph, and sprinkling rain seemed possible. The class also took some ground points with the TopCon GPS system (see the Survey Methods post for more info on this). After re-checking the same things as above, the second UAV was ready to fly. This UAV is considerably more powerful than the IRIS, so Professor Hupy cautioned us to stay back in case the gusts blew it toward us. After the flight began, the UAV was put on auto and began its route. However, the wind quickly became a problem, and a gust nearly flipped it for a second. This was due to the combination of a relatively sharp turn (where the UAV has to tilt) and the wind. The UAV couldn't right itself to get back on course properly. Michael at the base station noted this, and called for a return to launch to avoid a crash. This was a lesson in how important PIC (Pilot in Command) and PAC (Pilot at Controls) communication is. Professor Hupy couldn't see the UAV's planned route, so he wouldn't necessarily know why it was operating incorrectly. Michael at the base station quickly realized what was going on and told Professor Hupy, who then called it back to launch.


Discussion:


This exercise was very interesting, because we were finally able to see a UAS in action. It is important to understand that a UAS (unmanned aerial system) really is a system rather than just an unmanned or unpiloted vehicle. There is extensive planning and preparation that must be done to properly carry out a UAS mission, and it was very interesting to see this first-hand. I was also impressed by how effective the UAVs were in the wind. They were very steady and stuck to their routes really well. It was really useful to see the PIC-PAC interaction, because if they hadn't been on the same page, things could have gone wrong. This is important in any UAS mission.

Tuesday, May 12, 2015

Navigation with GPS

Introduction:


The purpose of this exercise was to set up a navigation course for future students using points plotted on a GPS. Each group was required to map five points, navigating to them and marking them with a Trimble GPS unit. At each point, flags were installed, marking them for use by future groups. An important aspect of this exercise was setting up a proper map for navigation on the GPS. The same navigation map that was used in the previous navigation exercises was imported onto the GPS. For more information on the navigation map's creation, refer to this post.

This is an image of the map used for navigation. This was also used on the Trimble GPS unit.
The Trimble Juno GPS device used for navigation and point storing. 


Methods: 


Upon arriving at the Priory, members from each group convened to plan the areas to cover with each course. My group's course was set for the northwestern corner of the Priory. The GPS, along with the navigation map and compass, was used to navigate to this area. Once there, points were selected based on their locations relative to one another; we didn't want to make the course too easy or too difficult. Their locations were also chosen for accessibility, with respect to elevation change, vegetation, and other such factors. Trees were used as points, so they were flagged with fluorescent surveying tape and labelled. GPS points were also taken. However, our group experienced a temporary issue with this. Before going out into the field, the GPS unit was set up without a feature class to be edited, meaning the map that the Trimble unit had loaded didn't allow for adding any features. Because of this, a new quick project had to be created, and the points taken on that. This made the GPS unit useless for navigation purposes.

After collecting the field data and returning to campus, the five points had to be checked back in, and mapped. The results are shown below.


Results:


One of the points in the navigation course

One of the points in the navigation course
One of the points in the navigation course

A basic map of the locations of the points marked in the UWEC Priory

Discussion:


As mentioned above, the inability to edit any feature classes in the Trimble GPS map rendered the GPS useless for navigation. We were then forced to rely on map and compass navigation, as was done in previous exercises. This is just another lesson in how technology can fail, so it is important to have the background knowledge to be able to get by without it. The rest of the process went smoothly, and favorable weather conditions made the exercise very enjoyable.


Conclusion:


Navigation methods today have shifted more and more toward GPS, so being able to properly set up GPS units, use them to navigate, and collect data with them is a very important skill. We learned the lesson that overlooking just one little element of the data check-out process can render the GPS more or less unusable. However, building upon previous exercises, we had the skills to navigate through our assigned section of the Priory, set up our course, and return to our starting point without any major issues.

Saturday, May 2, 2015

Navigation With Map and Compass

Introduction:


The purpose of this lab was to use simple distance / azimuth measurements to navigate UW-Eau Claire's land at the Priory. This involves simply using a compass, pace measurements, and the navigation maps we created earlier in the semester. For more information on the navigation map's creation refer to this post.

Before going out into the field, it was vital to do some background research on the basic principles of orienteering. The professor and a classmate compiled some resources to facilitate this. They also printed the navigation maps referenced above for each group, and provided us with compasses.

The object of this lab was to navigate, in groups of three, to a number of different points given to us by the instructor using just our distance/azimuth tools and pace counts. For more information on distance/azimuth surveying, refer to this post.

This is a compass similar to the one used in this exercise. The general idea is that first, the compass is lined up from the starting point, to the desired point. The red circle can be spun in order to line up with North on the navigation map, then the whole compass is spun so that the actual North arrow is within the red outline pointing north. Finally, the arrow at the top of the image is followed as the direction of travel.

Methods: 


Upon arriving at the Priory, each group convened to plot coordinates given to us by the instructor. There were five points total, given to us in a UTM coordinate system. The first step was to plot these on each map. Next, our compasses were used to determine the azimuths to follow from one point to the next. The instructor and a classmate also gave the class important advice on proper methodology for maintaining a valid course.

Groups planning for the navigation exercise


To find the proper bearing, the compass must be laid on the map on a flat surface. It must then be lined up from the starting point (the location in the Priory parking lot) to the first point marked by the instructor. Then, north on the compass housing must be lined up with north on the map. This is done by spinning the red circle (see above image for details). Next, the whole compass can be picked up and spun so that the actual north arrow falls exactly within the outline. This is commonly called "putting red in the shed." Being sure to maintain red in the shed, the arrow at the other end of the compass is followed. This is the direction of travel.

Another important step before heading out is calculating the approximate number of paces to expect before reaching the desired point. The navigation map that was used had a pace count for a group member, so the scale bar had to be used to calculate the approximate number of his paces between each point, as sketched below. Note: we calculated this along with the bearings for each point before heading out into the field. This ensured that we had a flat space to work on, and was ultimately less trouble than attempting to figure it out in the field.
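To make the arithmetic concrete, here is a minimal sketch of the pace-count calculation, with made-up numbers standing in for our actual map measurements and pace length:

```python
# A minimal sketch of the pace-count arithmetic; the numbers are
# hypothetical, not our actual field values.
map_distance_cm = 7.5    # distance between two points measured on the map
map_scale = 12000        # e.g. a 1:12,000 navigation map
pace_length_m = 1.5      # the group member's pre-measured pace length

ground_distance_m = map_distance_cm * map_scale / 100  # cm on map -> m on ground
paces = ground_distance_m / pace_length_m
print(f"{ground_distance_m:.0f} m = about {paces:.0f} paces")  # 900 m = about 600 paces
```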

A snapshot of the field navigation map that was used. Note the measurements taken on the map. This was done before surveying. 

Groups of three are ideal for this type of surveying. One person is in control of the compass, and directs another group member to a landmark that falls along the proper bearing; then the pace counter can walk to that person. Only after this is done can the compass-holder follow. This ensures that if there is some error, that reference point can always be returned to, and measurements can be retaken from that spot.

There were a total of five points to find using this methodology, and each group had a different order assigned. 


Results:


The first point that this group was assigned. This particular point was the hardest one to find, as it was 10 to 20 ft down in a ravine, and because we hadn't solidified our methodology yet. See the discussion section for more information on this.
The second point assigned to this group.
The third point
The fourth point assigned
The fifth and final point

Discussion:


There was some difficulty in finding the first point. Our azimuth measurement must have been a little bit off, because we ended up considerably east of the desired point. This was exacerbated by the fact that this was the longest distance between points in the entire course. At one point, having had little success with our three-person survey method (described above), we all grouped together, looking around the nearby area for the point. This is a natural human response to being lost, but it only makes the problem worse: we lost our points of reference, and no longer had any way of knowing where our bearing was. Luckily, we stumbled upon the point, and were able to continue the activity, being sure to be more careful in following our azimuth. The rest of the points were found relatively easily. To find the third point assigned to us (point 1), it was advised that we first make a beeline toward the parking area in which we started, then follow an azimuth a short distance to the point. This allowed for easy measurements, and less difficulty traversing difficult (sometimes impassable) terrain.


Conclusion:


Knowing how to navigate using simple tools is essential for field work because, as is well known, technology can and will fail. It is very plausible that a GPS wouldn't have reception under the heavy tree cover and in the ravines of the Priory, so using a compass and navigation map was very possibly the only way to navigate the land there.

Saturday, April 25, 2015

Topographic Survey Methods

Introduction:


This exercise provided an introduction to topographic surveying methods over two weeks, during which we were required to use two different surveying methods to complete the tasks. The goal in these two exercises was the same: to collect survey-grade GPS points outlining the microtopography of the mall area on the UW-Eau Claire campus.

This image shows the area of the UWEC campus that we were to examine with our topographic surveys.

The difference was in the methodology and technology used to complete this. The first week, students were provided with a TopCon HiPer GPS unit and a Tesla handheld. The handheld unit connects to the HiPer GPS via Bluetooth, and to a Verizon MiFi mobile internet hotspot to maintain an internet connection throughout the survey.

Handheld and Internet Hotspot

TopCon Tesla Handheld GPS unit. This unit is also capable of collecting GPS points, but not with the accuracy that the HiPer provides, so we used it for control purposes through use of its Magnet Field software.  It was mounted on the tripods for easy access. This unit was used in both sampling methods.
This is the Verizon MiFi unit that provides a reliable wireless internet connection for surveying. This was also mounted on the tripods.

Dual Frequency GPS Unit

TopCon HiPer GPS Unit. This is mounted on a tripod with leveling gauge, which maintains data accuracy.

Total Station

TopCon GPT-2500 Total Survey Station. This device operates differently from the GPS units: it is stationary, and its laser is directed at a reflector pole to collect points of interest. An occupied point must first be acquired. This is where the Total Station will stay throughout the survey. Next, a backsight point must be taken. Both of these can be acquired using a GPS. The backsight point must be shot from the total station to the reflector pole as well; this sets up the ground-to-grid relationship to ready the total station for data collection.
Using these two very different methods, we reached the same goals in this exercise. This allows for a comparison of the methodology to determine the equipment best suited for a given job.

Methods: 


The first week in the field included using the HiPer Dual Frequency GPS Unit to conduct our topo survey. It was very important to follow the instructor's detailed lesson on how to operate the software, and what steps to follow in preparing to go out into the field. This included properly connecting the Tesla handheld unit to the HiPer GPS, and connecting to the MiFi internet, along with properly creating, setting up, and opening a job for use in the Magnet software. Once this was done, we went out into the field. With the software properly set up, and the GPS and handheld mounted to a tripod, collecting points was relatively easy.

A picture of me operating the HiPer GPS / Tesla Handheld system. The tripod allows for leveling, which increases the accuracy of the point. When taking each point, we used point averaging for 5 points, which took around 7 seconds, based on the GPS' signal. This provided a more accurate point than the 3 point default setting. A total of 100 points were taken using this method, and were later imported into ArcGIS for analysis.

The next week, the Total Station method was to be used. It was equally, if not more, important to follow our instructor's directions in this exercise, as this method requires even more equipment, software, and methodology. In fact, it took my partner and me one failed attempt to collect points before we were able to set everything up correctly and collect data.

First, it is necessary to collect an occupied point. As previously mentioned, this is the place where the total station will stay throughout the survey. The occupied point was collected using the HiPer GPS in the same way it was used to collect points the past week. When this was accomplished, it was necessary to collect a backsight point, which the HiPer collected as well. These points were marked by flags and stored in the Tesla handheld unit for use later in the survey. Next, we disconnected the Tesla unit from the HiPer GPS and reconnected it to the Total Station. This was not a smooth process; as was quickly discovered, the HiPer had to be turned off and the Total Station restarted for the Tesla to connect to it properly.

Next, the Total Station had to be set up directly above the occupied point. This involved setting up the tripod, placing the Total Station on it, and then using a laser finder device to ensure that it was directly above the flag that had been set previously. It also had to be leveled, using a number of leveling gauges on the device. This was difficult, but necessary to ensure proper accuracy: these high-grade survey systems are only as accurate as they are properly used.

In the Magnet software, the backsight and occupied point had to be set up before points could be collected. This included selecting the previously collected points from a list, and then sending someone with the reflector pole to the backsight to be shot with the Total Station's laser. This also included inputting the heights of the Total Station and the reflector pole. Once this was finished and the ground-to-grid relationship was properly defined, data could be collected.
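For the curious, the geometry behind the backsight is straightforward: the software can compute the grid azimuth from the occupied point to the backsight from their GPS coordinates, and shooting the reflector pole on that backsight tells the instrument which way it is pointing. Below is a rough sketch of that computation with hypothetical coordinates; this is an illustration of the idea, not Magnet's actual code.

```python
import math

# Hypothetical grid coordinates (meters) for the occupied and backsight
# points, as collected with the HiPer GPS.
occ_x, occ_y = 500000.0, 4970000.0
bs_x, bs_y = 500040.0, 4970030.0

# Grid azimuth from the occupied point to the backsight
# (degrees clockwise from grid north).
azimuth = math.degrees(math.atan2(bs_x - occ_x, bs_y - occ_y)) % 360
print(f"Backsight azimuth: {azimuth:.2f} degrees")  # 53.13 degrees

# The difference between this azimuth and the instrument's horizontal
# circle reading on the backsight orients all subsequent shots to the grid.
```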

Groupmates shooting the Total Station laser at the reflector pole. Once this was properly aligned, another person would save the point on the Tesla device (See below)

Myself and a groupmate shooting the laser and recording points

Myself and a groupmate recording points and shooting the laser
A number of points were collected using this methodology. The increased group size this week allowed each member to cycle through the different tasks associated with operating the Total Station.

The data was then exported from the Tesla device to a text file and transferred to the PC. It was then edited slightly before being imported into ArcMap as (X, Y, Z) data. The resulting points and microtopography are shown below in the results section. See the below tutorial video created by Martin Goettl for more information on exporting data from the Tesla device.
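As a sketch of what that light editing could look like, the following assumes the Tesla export is a comma-delimited text file with name, X, Y, and Z columns; the real export format may differ, so the column handling would need adjusting.

```python
import csv

# Convert the raw export (assumed comma-delimited: name, X, Y, Z)
# into a clean CSV with headers that ArcMap can map to X, Y, and Z fields.
with open("tesla_export.txt") as src, open("topo_points.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["Name", "X", "Y", "Z"])
    for row in csv.reader(src):
        if len(row) < 4:
            continue  # skip blank or malformed lines
        writer.writerow(row[:4])
```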



Results:


This image shows the results of a Kriging interpolation method on the collected data points after being imported into ArcMap. It includes the original points, as well as the interpolated surface. For more information on Interpolation Methods, see ArcHelp
Kriging interpolation surface with overlaid points collected using the Total Station.
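We ran the interpolation from the ArcToolbox dialog, but for reference, a minimal sketch of the equivalent call in arcpy might look like this (the feature class and field names are hypothetical stand-ins for our checked-in Total Station points):

```python
import arcpy
from arcpy.sa import Kriging, KrigingModelOrdinary

arcpy.CheckOutExtension("Spatial")  # Kriging requires the Spatial Analyst extension

# "topo_points" and its elevation field "Z" are stand-in names.
surface = Kriging("topo_points", "Z", KrigingModelOrdinary("SPHERICAL"))
surface.save("topo_kriging")
```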


Discussion:


The resulting maps themselves are not extremely significant. It is certainly worth noting that the dataset collected with the HiPer GPS system did not provide an accurate representation of the topography of UW-Eau Claire's campus mall area: the points were not distributed evenly enough to provide the Kriging model with proper data. The resulting dataset for the Total Station is much more complete, and provides a decent model.

However, what is more important than the maps themselves is the different methodology used to collect them. There were two very different methods used during these two weeks, and I think that each method has its pros and cons.

The HiPer Dual Frequency GPS system was relatively easy to use, and was operable by just one person. However, it was a hassle to move it from point to point, then level the tripod legs before collecting each point.

The Total Station method, once set up, was faster to operate, as it just took one person moving the reflector pole and another shooting it to take a point. It was less clunky to move around, but was extremely difficult to set up. As mentioned before, my partner and I went out into the field once and were never able to collect data because the setup was done incorrectly. This method also requires at least two people for operation, which can be seen as a con.

Each system is capable of collecting very accurate data, so it is a matter of the application to determine which is better suited for a given job. Since the HiPer system is harder to move around quickly, it would be recommended to use the Total Station on jobs that require many data points. On the other hand, for smaller-scale studies, or for ones undertaken by only one person, the HiPer system would be ideal. 


Conclusion:


Using two different methods with different tools to accomplish topographic surveys allows for comparison of methodology. Being able to determine which geographic tools are best suited for a given job is important, as it can make field work much simpler. In this particular case, each system is able to yield very accurate data, so it is a matter of determining which is more practical in certain situations. This is an important skill to have as a geographer: being able to select the most effective tool from an array of them, and to plan around it accordingly. As other exercises in this blog demonstrate, it is also important not to rely too heavily on any one method, as technology can and will fail. Because of this, it is important to understand the multiple methods that are available to accomplish a given goal.

Sunday, April 5, 2015

Distance Azimuth Survey

Introduction:


This lab was an introduction to surveying using the distance azimuth technique, which is simple but usable in many situations. With today's technology, it is possible to acquire very precise location data, but it is important to recognize the fact that this technology can fail. Professor Hupy stressed this, saying that it WILL fail, whether it be from low battery life, adverse weather conditions or other failures. Because of this, it is vital to know the basic techniques in field methods to be able to function effectively, independently of advanced technology. This exercise included using a TruPulse laser distance finder to record distance and azimuth readings for a minimum of 100 data points of our choosing. 

TruPulse laser that was used to record distance and azimuth readings for our features. 

My partner Emily Moothart and I chose to survey cars in the Phillips parking lot of UW-Eau Claire.

This is an aerial image of our study area. Note that this photo is not current, but the parking lot that we were studying is still very similar to this.
This image shows a panorama view of the parking lot we surveyed.
We were also advised to take note of the concept of declination before starting the survey. This refers to the angular difference between magnetic north and true north, which varies by location and shifts slowly over time, and can introduce error into azimuth measurements if ignored. Luckily, our particular location here in Eau Claire, WI has a small declination value, so it is negligible in this study. For more information on this concept, see the below video.



The other concept that is important to consider is the difference between explicit and implicit data, in this case as it relates to grids or coordinate systems. An explicit system uses real coordinates to delineate features, whereas an implicit one uses arbitrary grids to do so. The result is that implicit grids only show the relative positions of features, without reference to their spatial locations, whereas explicit grids do include that. This particular exercise is a kind of cross between the two. We will calculate the locations of features without using any actual coordinates, but afterwards we will assign our starting point real GPS coordinates so that the whole survey will be usable in the GIS.
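The trigonometry behind that conversion is simple. Below is a minimal sketch in Python, with made-up numbers, of how a distance/azimuth pair projects a feature's position relative to the origin; this is essentially what ArcMap's Bearing Distance To Line tool (used later) does for us.

```python
import math

def offset_point(x0, y0, distance_m, azimuth_deg, declination_deg=0.0):
    """Project a point from an origin given a distance and a compass azimuth.

    Azimuth is degrees clockwise from north. The declination correction
    (negligible here in Eau Claire) converts a magnetic bearing to true.
    Assumes a local planar grid in meters rather than lat/lon degrees.
    """
    az = math.radians(azimuth_deg + declination_deg)
    return x0 + distance_m * math.sin(az), y0 + distance_m * math.cos(az)

# e.g. a car 35 m away at an azimuth of 120 degrees from the origin
x, y = offset_point(0.0, 0.0, 35.0, 120.0)
print(f"({x:.1f}, {y:.1f})")  # (30.3, -17.5)
```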

Methods: 


Once we decided on our area of study, we went to our starting point to begin surveying. We set up the TruPulse and tripod and began taking readings of cars from left to right. The process was slow at first, but picked up relatively quickly as we became accustomed to it. I operated the TruPulse while Emily recorded the readings, as well as the type and color of each vehicle, on paper.

An image of me firing the laser at nearby parked cars.
Photo by: Emily Moothart

Another image of data collection with TruPulse unit.
Photo by: Emily Moothart


It was a very warm day, but heavy winds made data collection difficult at times. The tripod would shake in the middle of firing the TruPulse, which would not allow it to get valid distance or azimuth readings. We switched halfway through so that each of us got experience in operating the laser and recording. Due to time constraints, we weren't able to collect as many points as would have been ideal. Another group of students needed to use the equipment, so we settled for just 92 features.

After returning inside, we transferred our data into an Excel spreadsheet for later use in ArcMap. A preview of some of our points is shown below.

This shows the Excel document containing points for each feature we surveyed.
An important step before proceeding into ArcMap to map the surveyed data is to take note of the point of origin. This means noting the exact GPS coordinates of where we were standing when we were firing the laser distance meter. Using a spot that is easily identifiable in satellite imagery is a good idea. To find ours, we added a placemark in Google Earth imagery to yield the coordinates of where we had just been conducting our survey outside. Once found, the coordinates were added into our Excel table as x and y fields.

Next, the Excel file was imported into the geodatabase, and we used the Bearing Distance To Line tool to convert its information into a line feature class.

Bearing Distance To Line tool. The first field requires the table with the inputted distance/azimuth data. The X and Y fields are the GPS coordinates mentioned above of the starting point. Logically, the Distance Field asks for the distance reading, and the Bearing field requires the azimuth reading. The rest of the options should remain default so that the distance unit remains meters, the bearing unit remains degrees, and the Spatial Reference remains GCS_WGS_1984. This spatial reference operates well with the coordinates used for the starting point. 
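For reference, the same step could be scripted in arcpy roughly as follows; the table, field, and output names here are hypothetical stand-ins for our actual geodatabase items.

```python
import arcpy

# Build lines from the origin using the distance and azimuth readings.
arcpy.BearingDistanceToLine_management(
    in_table="survey.gdb/car_readings",
    out_featureclass="survey.gdb/car_lines",
    x_field="X", y_field="Y",                # GPS coordinates of the origin
    distance_field="Distance", distance_units="METERS",
    bearing_field="Azimuth", bearing_units="DEGREES",
    spatial_reference=arcpy.SpatialReference(4326),  # GCS_WGS_1984
)
```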
This tool's output is a feature class of lines, each heading along the angle recorded in the Bearing field and ending at the distance recorded in the Distance field. This is shown below in the results section. Once this tool creates the lines, the Feature Vertices to Points tool can be used to create a point feature class from their endpoints.

This image shows the location of the Feature Vertices to Points tool. The tool is simple, but it is important that the "Point Type" field is set to ENDPOINT so that only the endpoints are created, rather than the endpoints and beginning points. 
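Scripted in arcpy, this step might look like the sketch below, continuing the hypothetical names from the previous sketch; "END" is the point_location keyword corresponding to the dialog's endpoint option.

```python
import arcpy

# Keep only each line's far endpoint: the surveyed feature itself.
arcpy.FeatureVerticesToPoints_management(
    "survey.gdb/car_lines", "survey.gdb/car_points", "END")
```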
The results of this tool are shown below. 

Results:


This is the final map showing the results of the distance/azimuth survey and its integration into ESRI ArcMap. Features are classified by type.

Discussion:


The outputs from the tools indicate that there is substantial error associated with our surveyed points. After searching through our dataset, I don't believe that there is any input error. This means that the error must have come from taking our readings. When looking at the image of our lines, it becomes obvious that even a little bit of error in measuring the azimuth will result in a large margin of error for the resulting point, as the quick calculation below illustrates. An even more probable source of error comes from recording distance. If we didn't get a proper fix on the feature we were aiming at (say we missed and fired at a tree behind the car), this would become apparent on the above map.
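A quick back-of-the-envelope calculation shows how fast a small azimuth error grows with distance:

```python
import math

# Lateral offset produced by a 2-degree azimuth error at various distances.
for distance_m in (25, 50, 100):
    offset = distance_m * math.sin(math.radians(2))
    print(f"{distance_m} m away -> {offset:.1f} m off target")
# 25 m -> 0.9 m, 50 m -> 1.7 m, 100 m -> 3.5 m
```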

A strange pattern shows up in the final map: as the points get farther away, they appear farther and farther south of their actual locations. Though it may be worth noting that the aerial image used as a basemap is not current, that really should not affect the distribution of our points, because cars still park in the same places as they did when the image was taken. A more likely explanation is that accuracy decreases as features get farther away, partly from the TruPulse itself, but more significantly from user error as distance increases. It takes a couple of seconds to hold down the fire button on the device while maintaining a fix on the desired feature. At long distances, this becomes difficult. Also, the adverse weather conditions should be noted as a potential source of error: if the tripod was moved by wind, all subsequent readings would be slightly off.


Conclusion:


Though we did use an expensive laser distance finder device in this lab, it could have been conducted using much simpler tools. The purpose of conducting this survey was to familiarize ourselves with alternate methods for calculating spatial relationships among features using an implicit coordinate system. This type of survey is useful in situations where access to advanced global positioning may not be feasible either because of lack of resources, or because of technological failure.

Monday, March 30, 2015

Data Collection II

Introduction:


In this assignment, we collected microclimate data throughout the UW-Eau Claire campus, using similar methodology to last week's exercise. This involved deciding on a standardized geodatabase and feature class, deploying it to each Trimble Juno GPS unit, collecting microclimate data in pairs using a Kestrel weather meter, checking in each group's data, and merging all data into one cohesive microclimate dataset. With Professor Joe Hupy gone for the day, we were required to work together to properly carry out these procedures, with special help from students Zach Hilgendorf, Aaron Schroeder, and Michael Bomber for the tasks of distributing GPS and Kestrel units, deploying data to them, and checking in/merging data after collection.
Trimble Juno GPS unit. This unit uses ESRI's ArcPad application, which allows for GPS collection into a geodatabase.
Kestrel weather meter, which can be used to read temperature, wind speed, wind chill, dew point, percent humidity and a number of other climate figures.


Methods:


Before heading out into the field, it was important that everyone was able to deploy the data properly to their Trimble GPS units to ensure standardized data collection. This involved the same methodology as in the previous exercise; refer to the previous blog post for more information on this process. Armed with our GPS units with properly deployed geodatabases, and with Kestrel weather meters, we were ready for the next step.

We divided our UW-Eau Claire campus area of interest into 7 different sections: one for each group of 2 students. My partner Nick Bartelt and I were assigned the northernmost section, the one labeled '8' above. This area includes the campus footbridge and the area around Haas Fine Arts Center.

Recall that our microclimate data collection included taking readings on the Kestrel weather meter for temperature at the surface and at two meters, wind speed, wind chill, dew point, and humidity. It is worth noting that even though we included a field for wind direction, we excluded it from data collection because we didn't have a tool to take the reading quickly and accurately. For further information on these fields, refer to the previous two blog postings. When we went into the field, we began with Nick taking the Kestrel readings and myself recording them into the Trimble GPS unit. We began along the footbridge, continuing around the trails near Haas Fine Arts Center. We switched halfway through our study and ended up collecting over 60 GPS points. We had very few hitches in data collection thanks to our previous test run.

After collecting as many points as possible in our allotted time-frame, we returned to the classroom to check in our data. The classmates mentioned above helped us with this process, and completed the processing by merging each group's data into one point feature class. For more information on this, refer again to the previous blog post.


Results:


I chose to process the GPS points using the IDW (Inverse Distance Weighted) spatial interpolation method. This method estimates cell values by averaging the values of nearby sample points in a weighted manner; points that are closer to the cell being estimated have more influence on the average. For more information on this interpolation method, see ArcHelp. I overlaid the interpolated surface on top of the basemap for reference.
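To illustrate the weighting idea, here is a minimal pure-Python sketch of IDW at a single location; the ArcGIS tool does this over every cell of the output raster. The sample values are made up.

```python
import math

def idw(x, y, samples, power=2):
    """Estimate a value at (x, y) from (xi, yi, zi) samples using inverse
    distance weighting: nearer samples get larger weights (1 / d**power)."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0:
            return zi  # exactly on a sample point
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den

# Three hypothetical temperature readings (x, y, degrees F)
print(idw(5, 5, [(0, 0, 58.0), (10, 0, 54.0), (0, 10, 50.0)]))  # 54.0
```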


This image shows the different zones used, along with green points for each point that was taken by the class. This covered a wide array of terrain, from the middle of the footbridge, to points taken off trail in the woods. The points covered a large portion of the UW-Eau Claire campus in order to provide some variability in the following datasets. 

This image shows temperature variations from readings taken at 2m high, across the UWEC campus

This image shows temperature variations from readings taken at surface level, across the UWEC campus

This image shows wind chill trends

This image shows the changes in dew point across the UWEC campus

This image shows variations in wind speed throughout the UWEC campus

This image shows changes in humidity throughout the UWEC campus

Discussion:


When analyzing the above datasets, it becomes apparent that there are certain errors in the data. With different students taking readings across campus, it is inevitable that readings will vary slightly. Sometimes the Kestrel meters need a chance to cool down, or warm up, and this can influence how the readings are recorded. Also, if a gust of wind comes at just the right time, the Kestrel will calculate a much lower wind chill. The main idea here is that our dataset felt the effects of temporal variation, which was not accounted for in the study. This includes variation during each reading: ideally, one point on the map would include microclimate information from just one point in time, when really, taking each point took several seconds. During these seconds, conditions were prone to change, resulting in unwanted variation in our data. Perhaps even more importantly, the weather conditions were changing as we continued the data collection process. Referring to the temperature maps, when we started at the base of the footbridge, we were getting temperature readings at the top of our domain (60 degrees F), while at the end we were getting down into the 50s and even 40s. The sun went away during the sampling as well.

The temperature interpolated surfaces came back pretty logically, with high readings on lower campus in sunny areas, and colder temperatures back in the shady wooded areas.

The wind chill map has some strange results. There is a severe outlier in the quadrant labelled 2, on upper campus behind Governors Hall: it is hard to miss. Either there was an input error here, or someone's Kestrel read an outlandishly low number, but the result is that the entire map is less functional. The IDW output is classified into nine zones, so an outlier like this decreases the interpretability of the map. The majority of the colors shown in the legend are present around the site of the erroneous point, while just two or three are used throughout the rest of the map, representing actual wind chill values.

Dew point also has a couple of outliers, but they don't appear to affect the overall interpolation, probably because of the availability of sampled points in the neighborhood of the erroneous points. Basically, the IDW tool doesn't need to interpolate as much around these points as it did in the wind chill map.

Wind speed yields a logical map, with higher values on the footbridge and along the shore of the Chippewa River. There are a number of other random high values as well, but since wind is not constant, these are most likely attributable to gusts. Also, upper campus seems more consistently windy, as is expected.

The percent humidity map shows that the wooded areas sampled usually had high humidity readings. This is likely because of moisture retained by the trees, along with shaded areas maintaining snow cover on the ground.


Conclusion:


This exercise was a good introduction to doing field work in a team setting. The fact that Professor Hupy was absent added an interesting dynamic, as students were required to work together and troubleshoot. It is important for everyone to share the same goal when collecting data, and to do so in a standardized fashion. It was important that everyone knew the plan when getting ready to collect field data, and that we knew how to compile it afterwards. Also, this exercise introduced us to collecting microclimate data, which can be used in many different areas of interest to highlight variations in climate with respect to surrounding areas. This lab also demonstrated these variations for our UWEC campus based on its various physical features.

This exercise, coupled with the previous ones, ultimately included creating a geodatabase with proper domains, subtypes, feature classes, and fields; deploying it; collecting data; checking it back in; and analyzing the data. I feel comfortable with this process after having done it, and am sure that familiarity with it will be very useful for future field work.

Sunday, March 8, 2015

Data Collection I

Introduction:


In the previous assignment, we were assigned to create a geodatabase to suit our microclimate sampling exercise. The purpose of this week's lab was to test the use of GPS with our previously created geodatabases and work out any kinks that we might run into before next week, when we'll be collecting the microclimate data. We familiarized ourselves with the process of readying a geodatabase for use in ArcPad, deploying it to our GPS units, collecting data, and checking the data back in. We used Trimble Juno GPS units, inputting data from a Kestrel weather meter.

Trimble Juno GPS unit. This unit uses ESRI's ArcPad application, which allows for GPS collection into a geodatabase. In this case, the one from last week's exercise.
Kestrel weather meter, which can be used to read temperature, wind speed, wind chill, dew point, percent humidity and a number of other climate figures.


Methods:


The majority of this lab was done from inside, readying the geodatabase for use in the field. As described in the previous exercise, this step is especially important because it reduces unnecessary work while in the field.

First, I connected the Trimble unit to the computer and readied my geodatabase for deployment. This involved opening ESRI ArcGIS, enabling the ArcPad Data Manager toolbar, adding a basemap, and including my microclimate point feature class described in the previous exercise. For the basemap, I used a 2013 aerial image of the area, zoomed in to the UW-Eau Claire campus. Upon "getting data for ArcPad," only the extent of the area that I was zoomed into would be included. In this step I also checked out the microclimate feature class, and ultimately created a package that is deployable to ArcPad. I copied this package (a file in Windows File Explorer) into my student folder as a backup in case the deployment didn't occur properly. I also copied it onto the Trimble unit's memory card, making it available for use in the field.

The Get Data for ArcPad tool. This tool essentially takes the feature class and background image shown, checking them out for editing, and creating a package that is compatible with ArcPad. This package is then copied onto the GPS unit, and it will allow for digitization in the microclimate feature class. Play the video below for further information on data deployment. 


We were to go into the field in groups of two so that we could assist each other in collecting points, but as soon as I got outside I realized I had an issue. My microclimate feature class had no projection defined, so the GPS functionality didn't have a spatial reference for my points. This meant that I couldn't digitize, so I had to go back inside to reassess. I had to use the Define Projection tool in ArcToolbox on my microclimate feature class, defining it as WGS 1984. I figured that this GCS would be compatible with the Trimble units, because they take points in a GCS as well. By the time I had edited my feature class and gone through the above process again, most of my classmates had already finished taking their data. This meant that I had to go out alone and take my own Kestrel temperature, wind speed, wind chill, dew point, and percent humidity readings. I recorded these in my ArcPad session for only three different points on the south side of campus.
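The fix itself is essentially a one-liner in arcpy; the geodatabase path and feature class name below are hypothetical. Note that Define Projection only labels the data with a spatial reference; it does not reproject the coordinates.

```python
import arcpy

# Define the missing spatial reference on the microclimate feature class.
sr = arcpy.SpatialReference(4326)  # GCS WGS 1984
arcpy.DefineProjection_management("microclimate.gdb/microclimate_points", sr)
```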

The final step was to check our data back in, using a tool similar to the one used to check it out earlier in the exercise. This part worked smoothly, and my points were added back into an ArcMap session.

Discussion:


There were quite a few hitches that the class encountered in this lab, and it is good that we hit them now, as opposed to having them happen when we do our real data collection in the next exercise. I learned the valuable lesson that for features to be edited in ArcPad, they must have a projection defined. Also, I noticed that my GROUND_COVER field was actually called NOTES, which is an issue because I also have an actual NOTES field. This will need to be resolved before next week's data collection exercise. At the end of the exercise this week, we voted on one student's geodatabase to be used by the rest of the class for further microclimate surveying on campus. This means that all students will be using the same geodatabase with the same domains, basemaps, and feature classes, which will minimize discrepancies in the final dataset.


Conclusion:


This exercise was valuable because it allowed us to work out common issues that can occur when using GPS units to digitize data. Knowing how to deal with these issues is a very important skill to have when doing geospatial field work. Also, the exercise included valuable information on how to operate a Kestrel weather meter, and refreshed my memory on operating a GPS unit to digitize point features. Proper deployment of data is also very important, and the class experienced some of the issues associated with it.