
North Carolina State University | Aerial Robotics Club

MAE Box 7910, 911 Oval Drive

Raleigh, NC 27695

May 30, 2013

AUVSI Seafarer Chapter

Re: NC State Aerial Robotics Club Journal Paper Submission

Contest Judges:

The North Carolina State University Aerial Robotics Club has prepared an entirely new system to meet the requirements of the 2013 SUAS competition. Fenrir, the team's new aircraft, carries a payload including a Piccolo LT autopilot, an IDS machine vision camera, and an x86 Flight PC. The system uses a 2.4GHz link for manual flight control, a 5.8GHz link for imagery downlink, a 900MHz link for autopilot control and telemetry, and an additional 2.4GHz link for SRIC communication. The payload is powered by two 14.8V, 3300mAh lithium polymer batteries. Fenrir has a 10-foot wingspan, is about 10 feet long, and weighs 34 pounds gross. It is powered by a gasoline engine and carries 50 oz of fuel, allowing a flight time upwards of 1 hour. The system has logged over 8.5 hours of flight time and has proven to be both reliable and capable of excelling at the mission requirements of the competition. Enclosed please find our Journal Paper, as required by the competition rules.

Sincerely,

NCSU ARC

NCSU ARC 2013 AUVSI SUAS Journal Paper

North Carolina State University Aerial Robotics Club∗

Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC 27695-7910

∗Email Correspondence: [email protected]

The 2012-2013 academic year was very productive for the NCSU Aerial Robotics Club. The developments made represent leaps and bounds compared to previous years, and were all innocently precipitated by the desire to upgrade the system's flight computer. Failing to find space in the legacy air vehicle, ARCWulf, the team designed and built an entirely new system. The imagery and aircraft teams worked concurrently, and fed off of each other's enthusiasm and ideas to create the newly designed system, Fenrir. Fenrir outperforms ARCWulf on all accounts: it has proven capable of fully autonomous takeoff and tight search pattern tracking for efficient search area coverage. Additionally, the new imagery system provides higher quality images than the club has seen before, and downlinks the images to the ground, where the newly developed target recognition software rapidly completes automated target detection and provides the user an interface to enter target identification data. Extensive flight and ground testing has given the team confidence that the system will excel in completing nearly all the primary and optional performance parameters for the 2013 competition.

Figure 1: Fenrir system in flight.

I. Systems Engineering Approach

A. Mission Requirements Analysis

Drawing on the success of and knowledge gained from our previous systems, the Aerial Robotics Club (ARC) at North Carolina State University (NCSU) has worked hard to develop an entirely new system to meet the mission requirements of the 2013 AUVSI Student Unmanned Aerial Systems (SUAS) competition. Six Key Performance Parameters (KPPs) were set by the Request for Proposal (RFP): Autonomy, Imagery, Target Location, Mission Time, Operational Availability, and In-flight Re-tasking. Each KPP includes Threshold and Objective goals. Thresholds indicate minimum performance levels that qualify as attainment of their respective KPPs. Objectives indicate higher-level performance, exceeding the Threshold requirements; performing up to the Objective standard results in a higher score. Where safety and reliability would allow, the team chose to set the Objectives as its minimum design goals for the system's performance. This meant planning for a system capable of autonomous target identification, locating targets within 50 feet, completing the entire mission in a single attempt within 20 minutes, shifting the search area during flight, and autonomous takeoff and landing. Additional optional goals were specified by the RFP, including providing real-time actionable intelligence data, connecting to the Simulated Remote Information Center (SRIC), and imaging an “off flight path” target. All of these options were adopted by the team as secondary goals.

In addition to meeting the goals specified for the 2013 competition, the team aspired to develop a system that allowed for expansion and improvement to meet future mission requirements, while being more user-friendly to operate. On the air vehicle side, this meant making allowances for extra payload space and computing power to allow flight testing of various other university projects and prototype equipment. On the ground station side, this meant improving the system to be more quickly and readily deployable, without the need for excessive setup and teardown work to support a single mission.

B. Design Rationale

1. Aircraft

During the second half of 2012, the NCSU Aerial Robotics Club designed, developed, and built a new airframe to meet the current mission requirements, allow for future expansion of capabilities, and improve mission performance compared to the club's previous airframe, ARCWulf. Over five years of flying and competing with ARCWulf, various systems were added to the aircraft. All of these additions were aimed at improving overall mission performance, but as capabilities were added, unavoidably so was weight. ARCWulf, being a modified version of the popular Telemaster RC model, was not engineered specifically to manage the weight or bulk of these payloads. After an update to the onboard payload computer, it became very clear to the club that mission performance was suffering due to the air vehicle. ARCWulf was operating above its design gross weight, resulting in poor takeoff and climb performance, especially in the tight confines of our practice field. This also limited the aircraft to shallow bank angles (wide turn radius) to maintain altitude, resulting in search patterns that compromised efficient coverage of the search area. Finally, the airframe was simply out of space, excluding the possibility of further systems integration. A new design was deemed necessary to replace ARCWulf. This new aircraft would be Fenrir, designed to exceed the performance and load-carrying capabilities of ARCWulf and better meet the current mission requirements. Fenrir's design details are discussed later in the UAS Design section under Air Vehicle.

2. Payload

By far, the most important payload component installed in the system is the camera, which helps the team meet the Imagery and Target Location KPPs as well as the additional goal of imaging the off flight path target. This year, the club decided to switch from our old camera, a Nikon D60 DSLR, to a machine vision camera as the primary flight camera. The decision to switch was based on weight, image quality, and ease of programmatic control.

The full camera, battery, and lens weight of the Nikon D60 is 26.7 oz, heavily contributing to ARCWulf's gross weight and performance problems. Knowing that most of the imagery system testing would be conducted on ARCWulf before the new airframe was ready, a smaller, lighter camera was desired.

While weight was a driving factor in the decision to switch to a smaller camera, the club felt strongly that it must not compromise on image quality and, if possible, should improve it, as quality is clearly one of the most important factors in quickly and easily identifying targets. In the past, the club has found that machine vision cameras of sufficient quality were outside of the club's budget. However, as high resolution cameras become more widespread in industrial applications, high resolution machine vision cameras have come down significantly in price. In addition, machine vision cameras can forgo human interface features, such as a screen and buttons, for reduced weight and size. This means that they can include many of the same high resolution, high quality sensors found in DSLRs, at a fraction of the weight.

Another important feature in the club's search for a new camera was programmatic control. The Nikon D60 is a very powerful camera, but it is intended to be used by a human, not a computer. The Picture Transfer Protocol (PTP) provided a rudimentary feature set for capturing images; however, settings still had to be manually set and verified on the camera itself, and, more painfully, the implementation was unreliable. Machine vision cameras, as the name suggests, are designed from the ground up to be controlled by a computer. These cameras provide APIs exposing all camera features, allowing fully automated camera control and thus providing in-flight access to modify the camera configuration.


Early in the search for machine vision cameras, it became clear that the APIs for our best camera options only support processors based on the x86 architecture and would not be compatible with our existing ARM-based Pandaboard flight computer. With this knowledge, and a desire to move toward the ability to run some image processing onboard the aircraft, research began into building a lightweight yet capable x86 flight computer.

Software is an essential part of the payload system, and high quality, reliable, and maintainable software was a focus this year. Each component of the system needed to be resilient to failure in other parts of the system, such that one failing system does not bring others down with it. This is particularly important for image capture, where a loss of communications must not prevent further capture of images, which could be downloaded and processed upon landing. For maintainability, a central library usable in many different programs prevents code duplication and makes creating a new program simple.

3. Autopilot

The Piccolo LT is a high-grade, off-the-shelf autopilot system made by Cloud Cap Technology. The system provides the capability to meet all of the autopilot-driven KPPs, including the Autonomy and In-flight Re-tasking Thresholds and Objectives. It also supports attainment of the Imagery and Target Location KPPs by providing high-fidelity GPS position and attitude data to the imagery system through its serial connection to the flight computer. The Piccolo Command Center ground station software meets the safety requirements by displaying the airspace boundaries, airspeed, altitude, and current vehicle position on the autopilot ground control screen. The autopilot also provides the required failsafe and aerodynamic termination capabilities. Drawing on the club's 7 years of positive experience with this autopilot and the fact that it meets all of the requirements, the club elected to continue using the Piccolo LT autopilot on the Fenrir platform.

C. Expected Performance

The club has currently logged 22 total flights on the Fenrir platform and 8.5 hours of total flight time, 76% of which was fully autonomous. Much of the imagery and SRIC systems have undergone testing in ARCWulf all year and have over 15 hours of flight testing. Results of this testing are discussed later in the Test and Evaluation Results section. Based on flight test experience this year, the system is well prepared to meet the mission requirements of the 2013 SUAS competition.

The team expects to meet at least 5 Objective KPPs and at least the Threshold for the remaining KPP. While testing has demonstrated reliable autonomous navigation and takeoff performance, autonomous landing performance has not reached a satisfactory level as of this paper's submission. More auto-landing tuning will be conducted, and the team hopes to be ready to perform autonomous landings at competition. In addition to the primary KPPs, we expect to meet many of the "stretch" objectives outlined in the rules. Tables 1 and 2 show the team's performance expectations.

KPP                        Threshold Requirement    Objective Requirement
Autonomy                   Will Meet                May Meet (Auto-landing)
Imagery                    Will Meet                Will Meet
Target Location            Will Meet                Will Meet
Mission Time               Will Meet                Will Meet
Operational Availability   Will Meet                Will Meet
Dynamic Retasking          Will Meet                Will Meet

Table 1: Fenrir expected performance for KPP items.

In testing, the autonomous target recognition software reliably detects targets with a low false positive rate. Three of the five required target characteristics are currently attempted by the system. These characteristics have close to a 50% false positive rate, so the characterization objective may not be met. Since the system currently has no mechanism for differentiating between classification and identification, and not all characteristics are attempted, the system will not meet the identification objective.


Secondary Requirement            Expectation
Off-Axis Target                  Will Meet
Pop-up Target                    Will Meet
Auto Target Detection/Queuing    Will Meet
Auto Target Classification       May Meet
Auto Target Identification       Will Not Meet
Electronic Data Submission       Will Meet
Actionable Intelligence          Will Meet
SRIC                             Will Meet
Secret Message                   Will Meet

Table 2: Fenrir expected performance for secondary requirements.

II. UAS Design

A. Air Vehicle

As previously discussed under the Aircraft Design Rationale, the club decided to design and build a brand new aircraft. Fenrir was designed specifically to meet the 2013 mission requirements and allow for future expansion of capabilities as mission requirements mature. It is larger than ARCWulf, featuring a 10-foot wingspan and an overall length of over 10 feet (including its air-data boom). Empty weight is 19 pounds, and the aircraft was designed to have a maximum gross weight of 45 pounds. Ballasted test flights have been made with satisfactory performance up to the design gross weight. The allowable payload capacity of up to 26 pounds represents a drastic improvement over ARCWulf's 8-pound design capacity. The new aircraft was intentionally designed with a voluminous payload bay and the ability to carry dramatically heavier payloads than its predecessor. The current payload weighs only 15 pounds, which allows plenty of room for expansion and improvement of systems and adaptation to the changing mission requirements that are expected in the future. The payload bay was designed to be modular, featuring a set of mounting rails and a regular, defined pattern of fasteners to which payload modules may mount. Payload modules may be built independent of the aircraft, following a payload module design guide, and installed to the rails without modification to the aircraft. Separate modules are used for the flight computer, networking and radios, the camera gimbal, the autopilot system, and the payload power system. All may be easily removed for maintenance and system improvements.

The club chose a pusher-propeller, twin-tailboom configuration for Fenrir. This allowed for a large, highly accessible payload bay forward of the wing. With the engine mounted aft and the camera in the forward payload bay, exhaust gases and fluids cannot flow across the camera, keeping the lens clear of residue, reducing obscuration, and assisting in meeting the Imagery KPP. The air-data boom, including pitot tube and static ports, is mounted in front of the nose in the cleanest possible air for the best air-data accuracy. The configuration lends itself well to redundant flight controls, as it has two rudders. The aircraft uses separate servos for the left and right ailerons and flaps, the left and right rudders, and the left and right halves of the elevator. The airplane was designed with adequate control power to be flown safely following the failure of a single side of any of these flight controls, contributing greatly to the safety of operating the aircraft. Finally, this configuration leaves the engine and propeller guarded by the tail section, drastically reducing the possibility of a team member accidentally coming into contact with the rotating propeller. This makes the aircraft much safer to work around on the flight line.

As an extra line of redundancy, power to Fenrir's flight control servos is provided by two separate lithium polymer batteries. A circuit between the batteries and the servo power bus handles parallel loading of the batteries. When the voltage of the two batteries matches, both are drawn from in parallel, splitting the load and depletion evenly. In the event of a battery voltage mismatch, only the higher-voltage battery is used. In the event of a totally dead or short-circuited battery, that battery is isolated from the system and safe control can continue from the remaining battery. This ensures that a single battery failure cannot result in a loss of safe control of the aircraft.

It was also desired to achieve a high flight endurance. ARCWulf was limited to 20 minutes of flight time, which, while adequate to meet the Mission Time Objective KPP, was a limiting factor during flight testing and did not allow much margin for mission changes. Fenrir was designed with enough fuel and battery capacity for approximately 1 hour of loiter time. This allows more goals to be met during each flight test and the possibility of longer mission times, should an emergent target require a longer period of surveillance than anticipated.

An additional goal was to develop an aircraft with a high maneuverability envelope to allow for tight waypoint navigation tracking and dense search pattern pathing. ARCWulf's maximum safe bank angle was set at 25 degrees, allowing a minimum theoretical turn radius of 246 ft. Fenrir was designed with adequate structure and flight controls to safely execute turns at 75+ degrees of bank, yielding a theoretical turn radius of just 31 feet. This allows much denser coverage of the search area, assisting in better attainment of the Imagery and Target Location KPPs. The aircraft was also designed with an adequately high power-to-weight ratio to allow for steep (45 degree) climbs, and with flaps to allow steep yet controlled descents, contributing to better attainment of the Autonomy Threshold KPP.
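The bank-angle figures above follow from the standard level-turn relation r = V^2 / (g tan(bank)). The short Python sketch below evaluates it; the roughly 61 ft/s true airspeed is inferred from the radii quoted in this paper and is not a number the paper states.

    import math

    G = 32.174  # gravitational acceleration, ft/s^2

    def turn_radius_ft(airspeed_fps, bank_deg):
        """Level-turn radius: r = V^2 / (g * tan(bank))."""
        return airspeed_fps ** 2 / (G * math.tan(math.radians(bank_deg)))

    # ~61 ft/s is inferred from the radii quoted in the text, not stated in it.
    V = 61.0
    for bank in (25, 45, 75):
        print(f"{bank:2d} deg bank -> {turn_radius_ft(V, bank):5.0f} ft radius")
    # Prints radii within a couple of feet of the 246 ft, 115 ft, and 31 ft quoted above.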

B. Data Link

The system uses a variety of radio frequencies for air-ground communications between subsystems. For manual aircraft control, a 2.4GHz Frequency-Hopping Spread Spectrum (FHSS) system is used, sending control inputs from the external pilot's controller to a receiver onboard the aircraft. This system is capable of operating in the same environment as other common 2.4GHz radio-control systems, wireless networks, video, and other devices without being negatively impacted by interference.

The autopilot uses a two-way 900MHz link to pass commands and telemetry between the air vehicle and the ground station. This link is handled by the Piccolo's internal radio, based on the Xbee Xtend 1-watt radio. Via this link, autopilot operators are able to task and dynamically re-task the aircraft at any time during flight, as well as monitor real-time telemetry from the aircraft. Manual flight controls can also be passed over this link via the autopilot ground station's manual flight control console.

The payload uses a two-way 5.8GHz link primarily to pass images from the aircraft to the imagery interface. This is accomplished using an Ubiquiti M5HP Bullet on both the aircraft and the ground station, providing 50-100Mbps of throughput for imagery downlink. An additional 2.4GHz Ubiquiti M2HP Bullet is contained in the aircraft payload and operates as the SRIC link. This link is capable of connecting with the SRIC router and has been tested under a variety of conditions, discussed later in the SRIC section.

For communication between personnel, a 462MHz FRS (Family Radio Service) UHF radio system is used. This band is used by readily available 2-way radios and offers several miles of line-of-sight range. This link allows the ground control trailer personnel to communicate with external personnel such as the external pilot, flight test director, and observers.

To enhance system safety, the data link architecture has been designed such that, in the event of a comms emergency, manual aircraft commands, autopilot commands, and telemetry data can all be passed over multiple channels. The 2.4GHz manual aircraft control link, the 900MHz link, and the 5.8GHz link are all capable of passing commands from the safety pilot to fly the aircraft manually. The 900MHz system serves as the primary link for autopilot command, control, and telemetry; however, it is also possible to continue control of autonomous flight while receiving telemetry and passing commands via the 5.8GHz link. This arrangement makes the links for manual flight control doubly redundant and autonomous flight control redundant, greatly enhancing system safety.

C. Ground Control Station

To provide a sterile work environment for the UAS operators and a means of transporting systems to the flight line, the team uses a dedicated Ground Control Trailer (GCT). This 14-foot enclosed trailer provides workstations for 2 autopilot operators and 2 imagery operators, and contains the infrastructure necessary to support operation of the aircraft. The GCT contains permanently installed central AC and DC power systems, a dedicated Local Area Network (LAN), and all ground-based antennas. The GCT is equipped with an auto-tracking antenna mount that keeps the patch antenna for the 5.8GHz imagery link directed toward the aircraft at all times. The tracker controller and the 5.8GHz Ubiquiti M5HP Bullet both interface directly to the GCT LAN. The tracking and other antennas are weatherproof and permanently mounted to the roof of the trailer to both allow a clear view of the aircraft and reduce the setup time of the ground station.

The GCT has a permanently mounted system of intercoms and FRS radios to facilitate communication between personnel. A 2-place intercom and accompanying headsets at both the Autopilot and Imagery control stations allow 2 operators at each station to converse privately. Each operator is then able to transmit to the rest of the team over the FRS radio via push-to-talk switches at each station. Operators outside the GCT carry personal FRS radios. This system allows operators to focus on their primary tasks while also providing for more effective communication between personnel. It also facilitates communication if the mission requires the flight line crew and external pilot to operate in a location remote from the GCT for any reason. The need for such a system was made apparent during demonstration of the ARCWulf system at the 2012 SUAS competition, when a communication breakdown due to remote operations cost the team significant time and mission success.

While not permanently installed, the GCT allows operators to set up all necessary ground station computers and additional hardware ahead of an impending mission. These systems can remain in place during transit to the flight line, further reducing the required setup time immediately prior to the mission. External systems, such as the GCT's AC generator and a Ground Power Unit (GPU) for the aircraft, are staged for immediate deployment upon arrival at the flight line. Thanks to this high level of system readiness, setup times from arrival at the flight line to comms-on readiness average under 5 minutes, contributing to reliable attainment of the Operational Availability and Mission Time KPPs.

1. Autopilot Interface

Figure 2: The Piccolo Command Center's Primary Flight Display.

The autopilot ground control station is contained within the GCT. It consists of two operator stations, the autopilot ground station radio console and manual control console, and one or two laptop computers running the Piccolo Command Center (PCC) software. PCC provides an intuitive autopilot control and telemetry interface. A Primary Flight Display (PFD), shown in Figure 2, shows the operator the aircraft's attitude, airspeed, altitude, heading, and waypoint information. A moving map display, shown in Figure 3, shows the aircraft's current position, all flight plans, airspace boundaries, and a satellite image of the ground. Flight plans and boundaries can be drawn and modified directly on this display. Multiple other windows can be configured to monitor all telemetry parameters received from the aircraft, as well as to adjust the autopilot's configuration, including the flight control gains.

The primary and secondary autopilot operators are responsible for separate tasks during flight. The primary operator is responsible for operation of the autopilot while the aircraft is under autonomous control. Such operations include launching, waypoint and search area navigation, and landing. For missions not requiring dynamic re-tasking, the secondary operator monitors aircraft systems. For missions where dynamic re-tasking is a requirement, the secondary operator is also responsible for updating flight plans and airspace boundaries. This division of duties allows for more efficient modifications to the flight plan while maintaining a high level of situational awareness by the operators, a critical factor in safe UAS operations.

Figure 3: PCC’s moving map display, showing airspace boundaries, flight plans and the aircraft’s position.


2. Imagery Interface

The primary imagery interface is provided through the imagery ground computer. This computer serves as the final storage point of all images and performs all image analysis. The majority of the imagery system is controlled through a series of command line programs that are monitored in a terminal. Flight PC programs for image capture and transfer are monitored remotely on the imagery ground computer via an SSH (Secure Shell) session.

A heavily modified version of the Mirage image viewer is used for image viewing, manual targeting, and monitoring of automatic targeting. The standard view can be seen in Figure 4. Mirage receives images for display from a central database, which is continually updated as new images are taken.

Figure 4: The modified Mirage imagery viewer, showing automatically- and manually-detected targets.

Additionally, all marked targets for that flight are displayed on the image, regardless of whether or not they were originally marked from that image. Manual targets are marked in green and automatic targets in red. This works by querying the database for all targets within the coordinates the currently displayed image covers. The advantages of displaying all targets are two-fold. For manual targeting, the computed target location from multiple images can be averaged together to help reduce GPS error that may be present in a single image. For automatic targeting, it allows for easy inspection of target candidates, without the need to find the original image in which the target was detected. All target markers can be clicked on to get more information, including location and target characteristics, as shown in Figure 5. For manual targeting, clicking an empty area of the image allows the addition of a new target at that point, or the averaging of that point into an existing target. Automatic target recognition, characterization, and identification is only monitored through Mirage; it runs as a separate program, which is described later in the Data Processing section.

Figure 5: Target information displayed via the Mirage interface.

3. External Pilot Interface

Outside the trailer, the external safety pilot uses a Futaba 8FG transmitter for manual aircraft control. This transmitter utilizes a robust 2.4GHz FHSS protocol to communicate with the aircraft. A toggle switch on the transmitter switches control of the aircraft between autopilot and manual control via a multiplexer onboard the aircraft. The external pilot also has an FRS radio with an earbud-type headset, allowing for communication with other personnel while still permitting him to hear the aircraft for aural monitoring of engine performance. A push-to-talk switch on the transmitter allows the pilot to transmit over the radio without hampering his ability to manually control the airplane. In the unlikely event of a failure of the external pilot's primary transmitter, a failsafe system automatically switches control back to the autopilot system. The external pilot can then utilize the autopilot ground station's manual control console to pilot the aircraft via either the primary 900MHz autopilot link or the 5.8GHz payload data link. These three systems for manual flight control provide double redundancy and a high level of safety. During flight testing, the primary 2.4GHz manual control link has never been lost, but the backup systems are exercised at each flight test.

D. Payload

1. Autopilot System

The autopilot system onboard the aircraft consists of a Piccolo LT and a variety of peripherals aimed at improving overall system effectiveness. The basics include a GPS antenna and an air-data boom; however, a high-accuracy magnetometer and a laser altimeter have also been added to the system to further increase performance. A servo signal multiplexer also serves as a safety switch between manual and autonomous control.

The Piccolo LT autopilot is mounted on the autopilot payload module, central to the fuselage, on a vibration-damping foam mount designed to reduce engine vibration reaching the unit and allow cleaner gyro sensor data. A 3.5in Antcom L1 Active GPS antenna is installed on top of the autopilot module, just under the top skin of the fuselage. This is used by the autopilot for 3D position (latitude, longitude, and altitude) and time data. This position data is also transmitted to the imagery system to determine the location of each captured image. An air-data boom, including a pitot tube and static ports, extends from the nose of the aircraft. This provides basic static and total pressure measurements to the autopilot, allowing for barometric altitude and airspeed measurement.

In the nose of the aircraft, away from electromagnetic interference, a Honeywell HMR2300 magnetometer is installed, giving the autopilot three-dimensional magnetic field data. This is used to determine the aircraft's magnetic heading, aiding in wind estimation and improving navigation performance. This heading is also used in the event of a loss of GPS lock for dead-reckoning navigation until a GPS lock can be reacquired. Further, the magnetic heading is used by the imagery system to determine the orientation of each captured image.

Last but not least, a LaserTech TruSense S-200 laser rangefinder is installed in the aft fuselage, looking downward through the lower fuselage skin. Coupled with a Moster Aerospace PTD-A011 Peripheral Translator connecting to the autopilot's CAN bus, this rangefinder is used as a laser altimeter, yielding highly accurate AGL (Above Ground Level) altitude data. AGL data is very important to both autonomous landing performance and the imagery system, in which the AGL altitude is used to determine the area of the ground covered by each image.
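As a concrete illustration of how AGL altitude translates into ground coverage, the following sketch computes the footprint of a nadir-pointed camera from its field of view; the FOV numbers used here are placeholders for illustration, not the flight camera's actual specifications.

    import math

    def ground_footprint_ft(agl_ft, hfov_deg, vfov_deg):
        """Ground area covered by a nadir-pointed camera at a given AGL altitude."""
        width = 2.0 * agl_ft * math.tan(math.radians(hfov_deg) / 2.0)
        height = 2.0 * agl_ft * math.tan(math.radians(vfov_deg) / 2.0)
        return width, height

    # FOV values below are placeholders, not the flight camera's published specs.
    w, h = ground_footprint_ft(agl_ft=300.0, hfov_deg=50.0, vfov_deg=40.0)
    print(f"approximately {w:.0f} ft x {h:.0f} ft of ground per image")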

In the aft section of the fuselage lies an Acroname RxMux. This multiplexer circuit allows switching between two sets of Pulse Width Modulation (PWM) servo signal inputs to a single set of outputs. An R/C receiver, coupled to the external pilot's ground station controller, provides the primary input to the multiplexer. The secondary input takes in the autopilot's servo command PWM outputs. The aircraft's control system servos are connected to the multiplexer's output. When the external pilot clicks a switch to hand control of the aircraft to the autopilot, the multiplexer switches from the R/C receiver input to the autopilot input, allowing autonomous control. In the event of any autopilot problems, up to and including a full autopilot hardware failure, the multiplexer can switch control back to the R/C receiver and the airplane can remain under safe manual control. This system contributes greatly to the overall operational safety of the system.

As Fenrir was a brand new design, there was no pre-existing Piccolo software configuration that provided an acceptable starting point. An Athena Vortex Lattice (AVL) model, used in the stability and control analysis of the aircraft, was used to generate stability derivatives to build a FlightGear simulation model. This was used for Software-in-the-Loop simulations to set the initial flight control gains, and later for Hardware-in-the-Loop simulations to further refine those gains. After achieving satisfactory simulation results, the configuration was flight tested on the actual aircraft. Some minor tuning was still required, but the final control gains were extremely close to those determined through simulation. Results of that flight testing are discussed later.

From the Piccolo Command Center, mission limits can be set within the Piccolo autopilot that help meet several of the safety requirements. The system is configured to automatically activate aerodynamic termination following a 3-minute loss of autopilot communications. Coupled with failsafe settings in the external pilot's controller that pass control to the autopilot in the event of failure of that link, the start of the 3-minute countdown to aerodynamic termination is assured in the event that both primary aircraft control links are lost. In the event that the autopilot appears to lose control or proceeds to fly the aircraft beyond the airspace boundary limits, control will be passed back to the external pilot, who may manually assert flight termination should the need arise. Per the requirements, aerodynamic termination is enacted through a specified set of control inputs: closed throttle, full up elevator, full right rudder, full right aileron, and full flaps down. During initial flight testing of Fenrir, this set of control inputs was tested and confirmed to result in a stable spin mode, producing a minimum-energy condition and the safest possible termination of the aircraft.

During flight testing, it was determined that the time required to attain a GPS lock after system startup was inconsistent, ranging from a matter of seconds to tens of minutes. This inconsistency would hinder our ability to meet the Threshold Mission Time KPP of mission completion within 30 minutes, much less the Objective of 20 minutes, and even endanger attainment of either Operational Availability KPP. In order to better assure mission readiness, a procedure has been put in place to power on the autopilot with its 900MHz radio disabled. Ground power and a hard-wired ethernet connection to the plane allow the system to be started and a GPS lock acquired during initial setup time, saving precious time after the official mission start. Following comms-on clearance, the hardwired connection is used to re-enable the autopilot's 900MHz radio, and the normal wireless link is then established. Circuitry onboard the aircraft allows for a seamless transition from GPU to battery power shortly before engine start, preventing a system reboot and likely loss of GPS lock.

2. Imagery System

The imagery payload onboard Fenrir consists primarily of a camera, flight computer, camera gimbal, 5.8GHz radio, ethernet switch, and data contributions from the autopilot. The flight computer is used for control of the camera, image capture, onboard image storage, and imagery downlinking to the ground. The flight computer interfaces to the camera and the 5.8GHz Ubiquiti Bullet through a Gigabit ethernet switch. The flight computer also uses an RS-232 link to the autopilot to receive telemetry data. This data is packaged with the images at the time of capture for storage and downlinking to the ground. Autopilot attitude data is also used to control the camera gimbal. An overview of the system is shown in Figure 6.

[Figure 6 block diagram: Flight Computer, Gigabit Switch, Camera, Gimbal Controller, Gimbal, 5.8GHz Bullet, and Autopilot, connected via ethernet, USB, servo PWM, and RS-232 links within the imagery system.]

Figure 6: An overview of the major components of the imagery system.

As previously discussed, the club desired to move away from our old DSLR camera to a machine vision camera. After much research, an IDS UI-549SE machine vision camera was chosen as the primary flight camera. The full weight of the IDS UI-549SE plus lens is only 7.8 oz, a dramatic improvement over the more than 2-pound Nikon D60. This reduction in weight was very helpful for testing in ARCWulf, which was overweight with the old camera.

Though it has the same resolution as the Nikon D60, the IDS UI-549SE provides superior image quality. The lens chosen to go with the camera, a Kowa LM5JC10M, has a significantly higher resolving power of 200 line pairs/mm, versus 50 line pairs/mm for the lens used with the Nikon D60. Additionally, the lens selected only degrades to 160 line pairs/mm at the corners. Test results confirming the image quality improvements are discussed in the Test and Evaluation Results section.

As desired, programmatic control saw a drastic improvement with the new camera. The IDS camera provides a full API with access to all camera features. The club has written a Python module that exposes these features to the rest of the imagery system, providing a simple and, more importantly, reliable interface to the camera. All aspects of the camera's configuration are set through the controlling computer, meaning that settings can be changed in flight, should the need arise.
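The club's Python camera module is not reproduced in this paper; the sketch below only illustrates the kind of thin wrapper interface described above, with invented class and method names (FlightCamera, set_exposure, grab_frame, and so on), showing how configuration changes and capture can be exposed to the rest of the imagery software.

    import time

    class FlightCamera:
        """Hypothetical wrapper interface; the real module wraps the IDS vendor API."""

        def __init__(self, backend):
            self._backend = backend  # vendor SDK handle, not shown here

        def configure(self, exposure_ms=None, gain=None):
            # Settings pass through the flight computer, so they can change in flight.
            if exposure_ms is not None:
                self._backend.set_exposure(exposure_ms)
            if gain is not None:
                self._backend.set_gain(gain)

        def capture(self):
            # Tag each frame with a capture timestamp so telemetry can be matched later.
            return {"time": time.time(), "pixels": self._backend.grab_frame()}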

As early research indicated, the API for the IDS camera we chose does not support our previous ARM-based flight computer, so a new mini-ITX form-factor x86 flight computer was built. This system provides several benefits in addition to simply being compatible with our camera. It includes multiple RS-232 COM ports which can be used to interface to multiple peripherals, like the autopilot, without the need for additional USB-to-RS-232 adapters. It also provides considerably more computing power; it utilizes a dual-core 2.6GHz Intel Core i3 processor, compared to the dual-core 1GHz Pandaboard. It is also equipped with a 256GB Solid State Drive (SSD), a major upgrade over the 64GB SSD on the previous flight computer. With the IDS camera able to capture up to 6 images/second, additional storage space onboard the aircraft was a necessary improvement.

Onboard Fenrir, the IDS camera is mounted in a custom-designed 2-axis gimbal. In nadir mode, used while capturing imagery during normal search operations, the gimbal receives pitch and roll data from the autopilot via the flight computer and keeps the camera pointed at the nadir. The gimbal has a range of motion of 40 degrees in roll and 35 degrees in pitch, allowing nadir-centered pictures at any aircraft attitude within this range. Telemetry fed to the imagery system from the gimbal controller indicates whether or not the gimbal was properly aligned to the nadir when each image was captured, allowing rejection of misaligned images from target positioning. Coupled with the camera's Field Of View (FOV), the gimbal's range of motion also allows the camera to be positioned up to 72 degrees off nadir, capturing imagery of targets that do not fall within the nadir FOV. This enables the system to meet the stretch goal of off flight path target identification. The gimbal also provides an autonomous self-calibration routine through use of an onboard accelerometer. Following initial setup or maintenance on the gimbal assembly, a self-calibration routine is run on the ground, during which the gimbal slowly sweeps, recording true gimbal pitch/roll versus servo command, and maps its control table accordingly. This allows for a high level of gimbal accuracy without the need for time-consuming and error-prone manual calibration. High gimbal accuracy plays a key role in meeting the Objective goal for the Target Location KPP; just 1 degree of gimbal misalignment in pitch and roll can result in a 7.4-foot target location error from an altitude of 300ft AGL.
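The 7.4-foot figure can be checked with a quick calculation, assuming the 1-degree error applies independently in pitch and roll and that the two ground offsets combine as a vector sum (the paper does not show this arithmetic):

    import math

    agl_ft = 300.0
    misalign_deg = 1.0

    # Ground offset from a 1-degree pointing error about a single axis.
    per_axis_ft = agl_ft * math.tan(math.radians(misalign_deg))   # about 5.2 ft

    # Equal pitch and roll errors combined as a vector sum.
    combined_ft = math.hypot(per_axis_ft, per_axis_ft)            # about 7.4 ft
    print(f"{per_axis_ft:.1f} ft per axis, {combined_ft:.1f} ft combined")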

The software design is split into several discrete parts in order to achieve the design goals of maintainability and reliability.

PostGIS-enabled PostgreSQL database: The database is the heart of all information in the system. This database stores information about each flight, each image taken, and each target found. Since it is PostGIS-enabled, location information is a first-class type. Each image is stored along with a polygon geometry describing the area it covers, and each target is stored with a point geometry describing its location. By storing location information as a primary type in the database, very powerful queries on the data can be performed. For example, the image viewer is able to query for all targets contained within the current image, which returns a list of targets that requires no further processing (a sketch of such a query follows this list). Furthermore, this database acts as the primary source of image information. That is, client programs need only query the database to find new images. They need no connection to the image capture or downlink systems.

Core libraries: These libraries provide a single API for commonly accessed features. These include a flight class for performing database actions at the flight level, such as getting a list of all images or inserting a target. An image class wraps images captured by the system, providing easy access to metadata embedded within them, as well as convenience functions, such as converting X,Y coordinates in an image to Lat,Lon coordinates on Earth. Autopilot and camera classes provide a common interface to those devices. With these core libraries, the remaining system components can focus on their tasks without needing to implement common features.

Telemetry daemon: This is a simple wrapper around the autopilot class. It buffers autopilot telemetry, allowing other programs to request telemetry from a specific time. This ensures images get tagged with precise telemetry, even if they arrive from the camera several seconds late.

Gimbal control: Receives telemetry from the telemetry daemon and commands the gimbal to remain pointed at the nadir. The gimbal reports its nadir status, which is buffered by this program, allowing the nadir status at a specific time to be queried. The nadir status information is used to establish trust that an image was taken level.


Image capture: Due to the modular design of the system, this program simply stitches together multiple components: using the camera class to capture images, embedding received telemetry and gimbal nadir information in the images, and inserting them into the database if specified. This program provides a configurable interface to the operator, allowing several camera settings to be specified, as well as several options for saving, such as simply saving to disk, inserting into the database, or passing to the downlinker.

Image downlink: This simple program separates the downlink process from image capture, ensuring that image capture is not dependent on the network.

Image viewer: The human interface to the captured images. The core libraries are used to provide a visual interface to the information available about a given flight, as discussed previously in the Imagery Interface section.

Autonomous target recognition: Uses the core libraries to acquire images to analyze, and detects and characterizes targets within them. The methods used are further discussed in the Data Processing section.
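As a sketch of the spatial query mentioned under the database description above, the snippet below asks PostGIS for every target whose point geometry falls inside an image's footprint polygon. The table and column names (images, targets, footprint, geom) and the psycopg2 connection details are assumptions for illustration, not the club's actual schema.

    import psycopg2

    # Connection parameters and schema names here are illustrative assumptions.
    conn = psycopg2.connect(dbname="imagery", user="arc")

    def targets_in_image(image_id):
        """Return all targets whose point geometry lies inside an image's footprint polygon."""
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT t.id, ST_Y(t.geom) AS lat, ST_X(t.geom) AS lon, t.shape, t.letter
                FROM targets t
                JOIN images i ON ST_Contains(i.footprint, t.geom)
                WHERE i.id = %s
                """,
                (image_id,),
            )
            return cur.fetchall()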

3. Simulated Remote Information Center (SRIC)

The club was pleased with its performance successfully gathering the SRIC information in 2012, but desired to improve its integration with the remainder of the system. Building on last year's system, a 2.4GHz Bullet M2HP is used as the primary SRIC communication router. The Bullet is configured to connect to two different networks. On the wireless side, it connects to the SRIC ground router as a client, setting the provided static IP address on the SRIC subnet. On the ethernet side, the Bullet is connected to the aircraft network with a known IP address. The Bullet acts as a gateway to the SRIC subnet by forwarding all traffic bound for the SRIC network that it receives on the ethernet network. The flight PC is configured to use the Bullet as its gateway to the SRIC network, so all requests are forwarded to the Bullet and, in turn, the SRIC network. In flyby testing, where the aircraft was flown at 300ft AGL about 150ft east of the SRIC system at its closest point, the system was able to connect and download about 4MB of data before losing its connection. While this is a small amount of data, it is more than sufficient for downloading small text files. By only doing a flyby of the SRIC location, more time can be spent covering the search area. In the event that a flyby is not sufficient, the aircraft can perform a 300ft orbit around the SRIC location at 300ft AGL, which is sufficient to maintain a constant connection with the SRIC router.
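Because the flyby link is intermittent, downloads of the SRIC message benefit from simple retry logic. The sketch below shows one way this could be done; the SRIC address and file path are placeholders, and the paper does not state which client software the club actually used.

    import time
    import requests

    SRIC_URL = "http://192.168.1.1/message.txt"   # placeholder address and path

    def fetch_sric_file(url=SRIC_URL, attempts=30, timeout=5):
        """Keep retrying over the intermittent flyby link until the file downloads."""
        for _ in range(attempts):
            try:
                resp = requests.get(url, timeout=timeout)
                if resp.status_code == 200:
                    return resp.content
            except requests.RequestException:
                pass                      # link dropped mid-transfer; try again
            time.sleep(1)
        return None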

E. Mission Planning

The goal of mission planning is to set up the system to accomplish as many of the KPPs as possible in the allotted mission time. Most mission planning is accomplished before a mission begins, although it is possible to dynamically re-task the system at any time to meet the KPP for re-tasking. Prior to mission set-up, flight plans are created for autonomous takeoff, for search area coverage, and for autonomous landing. Autonomous takeoff flight plans are based on which takeoff direction is favorable for the expected wind conditions; however, this can be changed prior to the mission as necessary. A flight plan is generated to take the aircraft from takeoff to the start of the waypoint navigation path, through the waypoint path, and to the start of the search area. The search area pattern typically consists of flight paths in parallel rows that are connected with 180-degree turns. The turns are all performed in the same direction so that the aircraft progresses from one side of the search area to the other. After the aircraft traverses from one side of the search area to the other, it enters another series of parallel-row search patterns that are perpendicular to the previous set. The combination of search pattern directions provides efficient and complete coverage of the search area. After coverage of the search area is complete, the autopilot operator instructs the aircraft to return to the search area entry/exit point and then to enter the landing pattern. Autonomous landing patterns are defined during mission planning and selected at mission time depending on existing wind conditions.
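The parallel-row geometry described above can be illustrated with a short sketch that generates waypoints for an axis-aligned rectangular area; the actual planning is done graphically in the Piccolo Command Center, so this is only an illustration of the row layout.

    def lawnmower_rows(x_min, x_max, y_min, y_max, spacing):
        """Parallel rows across a rectangle, sweeping from one side of the area to the other."""
        waypoints = []
        y = y_min
        left_to_right = True
        while y <= y_max:
            row = [(x_min, y), (x_max, y)] if left_to_right else [(x_max, y), (x_min, y)]
            waypoints.extend(row)
            left_to_right = not left_to_right
            y += spacing
        return waypoints

    # 230 ft row spacing matches the minimum path spacing quoted for Fenrir later in the paper.
    first_pass = lawnmower_rows(0, 2000, 0, 1000, spacing=230)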

In the event that dynamic re-tasking is necessary, the secondary autopilot operator manages adjustments to the airspace boundaries and flight plan while the primary operator focuses on the aircraft, monitoring its navigation performance and guiding it between mission stages. Once the secondary operator configures and sends the updated boundaries and flight plans to the aircraft, the primary operator may activate them as necessary.

F. Data Processing

There are two different paths for data processing once images have been downloaded: manual and automatic target detection, classification, and identification.


Manual targeting is performed using the custom Mirage image viewer, as described in the Imagery Interface section. Targets are marked in the viewer, which automatically computes their location based on the image telemetry, and are inserted into the database with the characteristics provided by the operator. One operator is dedicated to performing manual targeting, as well as finding the off-axis and pop-up targets. Once all targets have been identified, they can be quickly exported to the format specified by the RFP using a single script.

Automatic target recognition, characterization, and identification run as a separate program on the ground. During a mission, it is configured to connect to the database and process each new image inserted, placing the results back into the database. An overview of the process used to detect and characterize targets can be found in the following section, starting at step four.

1. Method of Autonomy

1. Capture image

2. Queued for download

3. Downloaded to ground payload PC, inserted into database

4. Target recognition software begins analyzing new image in database

5. Image searched for contrasting blobs in the hue channel of the Hue, Saturation, and Luminance (HSL) color space

6. Blob area and dimensions checked against RFP requirements → Target candidates

7. Inside of target candidate checked for another contrasting blob → Letter candidates

8. If found, many rotations of letter candidate run through Optical Character Recognition (OCR)

9. If OCR finds letters, highest confidence rotation used → Letter

10. Target orientation derived from letter rotation and image heading → Orientation

11. If letter valid, consider valid target → Target

12. Target distance from centroid to edge measured at each angle → Shape signature

13. Signature aligned with reference shapes and total squared error between reference and signature taken. Least error → Shape

14. (Currently incomplete) → Target color

15. (Currently incomplete) → Letter color

The automatic target detection and classification system fulfills the automatic detection and automatic characterization objectives specified in the RFP. The system is capable of detecting targets with a low false positive rate, as well as classifying letter, orientation, and shape. At the time of this paper's submission, target and letter color are incomplete. Because not all characteristics are determined, and our testing shows that classification has a higher false positive rate than detection, the system will not meet the autonomous identification objective.
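A minimal sketch of steps 5 and 6 of the pipeline above (contrasting hue blobs filtered by physical size) is shown below using OpenCV. The hue-difference threshold, size limits, and single ft-per-pixel scale argument are simplifications for illustration, not the club's actual parameters.

    import cv2
    import numpy as np

    def find_target_candidates(bgr_image, ft_per_pixel, min_side_ft=2.0, max_side_ft=8.0):
        """Steps 5-6: contrasting blobs in the hue channel, filtered by physical size."""
        hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS)
        hue = hls[:, :, 0]

        # Flag pixels whose hue differs strongly from the median background hue.
        background = int(np.median(hue))
        mask = (cv2.absdiff(hue, np.full_like(hue, background)) > 20).astype(np.uint8) * 255

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            w_ft, h_ft = w * ft_per_pixel, h * ft_per_pixel
            if min_side_ft <= w_ft <= max_side_ft and min_side_ft <= h_ft <= max_side_ft:
                candidates.append((x, y, w, h))
        return candidates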

Each characteristic requires a different method of analysis; these methods are summarized below.

Target detection: Targets are detected by searching for contiguous, contrasting blobs in the hue channel of the original image. These blobs are compared against the RFP requirements for target size and total area, using the image altitude metadata. Only targets that meet the requirements are kept as target candidates. Only target candidates later found to contain a letter are considered valid targets.

Target location: The target location is simply taken as the centroid of the target blob. The target X,Y location in the image is converted into a global coordinate by projecting the image's WGS84 coordinate into the Universal Transverse Mercator (UTM) projection, computing an offset in meters based on the image capture altitude, and projecting back into WGS84 (a sketch of this projection step follows this list).


Letter detection: Letter detection within the target is performed similarly to target detection. Contrasting blobs are found within the bounds of the target blob.

Letter recognition and orientation: The Tesseract Optical Character Recognition (OCR) library is used for letter recognition. Since OCR is not rotation invariant, the letter blob is fed into OCR in an array of different rotations. The letter returned with the highest confidence is taken as the target letter. The orientation is derived from the image heading and the rotation used for the selected letter.

Shape: From the centroid of the target, the outline of the shape is traced, and the distance from the centroid is recorded with respect to rotation, resulting in a 1D signal of distance from center versus rotation angle. This signal is then normalized, and cross-correlation is performed against a large database of template shape signals. The maximum of the cross-correlation between the shape and the template gives the angular offset needed to align the shape to the template, giving our algorithm rotation invariance. Once aligned to each template as well as possible, the sum of squared error is computed between the aligned signal and the template signal, and these errors are ranked smallest to largest. The template that results in the smallest squared error is the best match, and most likely the correct shape (see the sketches below).
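A sketch of the projection step described under Target location above is shown here, using the third-party utm package for the WGS84/UTM conversions. The pixel-to-meter scale and the image-axis orientation are simplified; the real system derives both from the capture altitude, camera geometry, and aircraft heading.

    import utm

    def pixel_to_latlon(image_lat, image_lon, dx_px, dy_px, meters_per_pixel):
        """Offset a pixel position from the image center and project back to WGS84."""
        easting, northing, zone, letter = utm.from_latlon(image_lat, image_lon)

        # Assumes image +x maps to east and +y (down in the image) maps to south;
        # the real system derives this mapping from the aircraft heading.
        easting += dx_px * meters_per_pixel
        northing -= dy_px * meters_per_pixel

        return utm.to_latlon(easting, northing, zone, letter)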
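The shape-matching step can likewise be sketched with NumPy: build the normalized radial signature, align it to each template with a circular cross-correlation, and rank templates by the resulting squared error. The template dictionary here stands in for the club's shape database.

    import numpy as np

    def radial_signature(distances):
        """Normalize a centroid-to-edge distance signal sampled at equal angle steps."""
        sig = np.asarray(distances, dtype=float)
        return sig / np.max(sig)

    def match_shape(signature, templates):
        """Align via circular cross-correlation, then rank templates by squared error."""
        best_name, best_err = None, np.inf
        for name, template in templates.items():
            # FFT-based circular cross-correlation gives the best angular alignment.
            corr = np.fft.irfft(np.fft.rfft(signature) * np.conj(np.fft.rfft(template)),
                                n=len(signature))
            aligned = np.roll(signature, -int(np.argmax(corr)))
            err = float(np.sum((aligned - template) ** 2))
            if err < best_err:
                best_name, best_err = name, err
        return best_name, best_err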

III. Test and Evaluation Results

A. Aircraft

Before payload integration, Fenrir was subjected to a rigorous series of flight tests. These included initial stability checks; stall tests in power-on, power-off, and all flap configurations; airspeed envelope expansion; flutter testing; and maneuverability testing. After passing all of these with satisfactory performance, the airplane was ballasted up to its design maximum gross weight and the tests were repeated. All tests were passed successfully, so the full payload was installed and mission performance testing began.

At the current gross weight of 34 pounds, on a standard day at 350 ft MSL, Fenrir's minimum takeoff distance over a 50-foot obstacle is 250 feet and its maximum rate of climb is approximately 2000 feet/minute. As shown by Figure 7, this is a major improvement over ARCWulf's 650-foot takeoff distance and 200 feet/minute climb rate.

Figure 7: Takeoff/initial climb profiles for Fenrir and ARCWulf. Distances shown are from start of ground roll to 50 ft AGL.

These improvements make flight operations at our relatively small flight test facility safer and help us better achieve large altitude changes between waypoints, aiding in meeting both the Threshold and Objective Autonomy KPPs. Currently, Fenrir's autopilot bank limits are set at 45 degrees, allowing a turn radius of 115 feet. Using simple 180-degree turnarounds, this allows us to space our search-pattern straight path segments 230 feet apart, less than half of ARCWulf's minimum path spacing, as shown in Figure 8.


Figure 8: Minimum path spacing for ARCWulf and Fenrir platforms in a typical search pattern.

This provides for excellent image overlap while flying at low altitudes, improving our performance in meeting the Imagery and Target Location KPPs. Although the airplane is capable of safe 75+ degree banked turns, it was determined during testing that the Piccolo controller does not perform reliably at bank angles exceeding 45 degrees, as altitude tracking suffers beyond this point. With future plans for autopilot upgrades, we hope to be able to increase the attitude limits and take better advantage of Fenrir's expanded maneuverability.

Testing also validated Fenrir's 1-hour loiter endurance. Accounting for time for takeoff and landing, this allows for over 3 times ARCWulf's effective mission time, letting us complete goals more efficiently during flight testing without the disruption of stopping to refuel. The initial autopilot tuning was accomplished within just the first 4 flights on the airplane.

B. Autopilot

As discussed previously, autopilot integration into Fenrir, an entirely new design, required determination of a full set of controller gains for accurate and reliable path tracking in autonomous flight. Gains determined through Software- and Hardware-in-the-Loop simulations provided a safe starting point, but flight testing was necessary to refine them for improved tracking performance. Lateral controller gains adjust how aggressively the autopilot attempts to track waypoints, as well as its response to flight path disturbances caused by wind. Accurate path tracking is crucial not only to the Autonomy KPP, but also to the Imagery KPP, to ensure sufficient coverage of the target search area.

The left side of Figure 9 shows an initial tuning flight using gains determined through SiL and HiL simulations. Fenrir's track is indicated by a series of dashed blue lines, while the programmed waypoint path is shown in green. At the start of the recorded track, tracking oscillations can be seen on the leg from waypoint 45 to 46. The turn from waypoint 46 to 47 was overly aggressive, followed by an over-correction resulting in the airplane entirely missing waypoint 47. After stabilizing on the path to waypoint 48, the autopilot began the pre-turn too early, leading to further oscillatory corrections near the waypoint path. In-flight tuning of the lateral controller gains was performed to improve waypoint path tracking and the Piccolo's recovery from wind-induced path deviations.

Figure 9: Tracking performance before (left) and after (right) autopilot lateral gain tuning.

Once an effective set of controller gains was reached, Fenrir's tracking performance was considerably improved, as evidenced by the right side of Figure 9. The recorded track shown indicates the autopilot's ability to tightly track the programmed path. This ensures full image coverage of the search area, reliable navigation within confined airspace boundaries, and safe autonomous operation of the aircraft.

During flight testing, AGL altitude readings from the laser altimeter were compared to normalized barometric altitude. The data show that laser AGL readings are negatively impacted when the aircraft flies over a dense grouping of trees, like those surrounding our practice field. The laser receives the highest return from the canopy of the trees instead of the ground, yielding the variances in AGL accuracy shown in Figure 10. Over flat ground, however, the laser altimeter data correlate closely with the barometric data, providing highly accurate AGL altitude information. This validates the laser altimeter's usefulness in the search area and near the runway. With its ability to estimate altitude to within 10 cm, the laser altimeter remains a valuable asset for autonomous landing and imagery operations.
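As an illustration of this kind of consistency check, the sketch below flags samples where the laser AGL reading diverges from the normalized barometric altitude by more than a chosen threshold; the data layout and 50-foot threshold are assumptions, not the club's actual tooling.

```python
def flag_canopy_returns(samples, threshold_ft=50.0):
    """Return indices where laser AGL disagrees with normalized barometric AGL.

    samples      -- iterable of (time_min, baro_agl_ft, laser_agl_ft) tuples
    threshold_ft -- assumed divergence threshold; large gaps typically mean the
                    laser is ranging off the tree canopy rather than the ground
    """
    suspect = []
    for i, (t, baro_agl, laser_agl) in enumerate(samples):
        if abs(baro_agl - laser_agl) > threshold_ft:
            suspect.append(i)
    return suspect

# Example with made-up samples: the middle point mimics a pass over trees.
data = [(17.0, 400.0, 398.0), (18.0, 400.0, 340.0), (19.0, 400.0, 399.5)]
print(flag_canopy_returns(data))  # -> [1]
```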

Figure 10: Laser and barometric AGL altitudes plotted versus time.

C. Imagery

During the Fenrir build process, the IDS UI-549SE machine vision camera was integrated into the old ARCWulf platform, allowing extensive testing to be done well before the new aircraft was complete. Over the course of the year, more than 20,000 images have been captured in flight. Image resolution and quality were verified through a series of subjective and objective tests. All test flights with the camera onboard were flown with targets placed in the field. The captured images were subjectively compared for sharpness and clarity on the targets. Comparisons were made with previous flights and image settings, as well as between center and edge locations in the image and across various altitudes, where significant improvements were noted. This subjective comparison offered a quick look at image quality and allowed rough tuning to be done on the fly. Problems such as focus and exposure issues could be diagnosed and fixed quickly via this method.

Figures 11 and 12 show an equal-altitude comparison of image resolution at the center versus the edges of the frame for the previous Nikon D60 and the new IDS camera. The resolution of the two cameras is comparable at the center, yet vastly different at the edges of the image. While the Nikon D60 shows a significant reduction in image quality near the edges, the IDS camera loses very little quality. This improvement makes more of each image usable for finding targets, which is especially critical for targets that may be near the edge of the search area.
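As a sketch of one objective proxy for this center-versus-edge comparison, the snippet below scores matching crops by variance of the Laplacian using OpenCV; the crop size and file name are arbitrary assumptions, and this is not the club's analysis code.

```python
import cv2

def sharpness(gray_patch):
    """Variance of the Laplacian: higher values indicate sharper detail."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

def center_vs_corner(path, crop=400):
    """Compare sharpness of a center crop against a corner crop of one image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    center = img[h // 2 - crop // 2:h // 2 + crop // 2,
                 w // 2 - crop // 2:w // 2 + crop // 2]
    corner = img[0:crop, 0:crop]
    return sharpness(center), sharpness(corner)

# A camera with good edge performance shows similar values for both crops.
# print(center_vs_corner("target_250ft.png"))
```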

Figure 11: Target images from the Nikon D60 DSLR camera, taken from the center (left) and edge (right) of images taken from 250 ft AGL.

Figure 12: Target images from the IDS UI-549SE camera, taken from the center (left) and edge (right) of images taken from 250 ft AGL.

In addition to the subjective tests, a resolution target was used to objectively measure image resolution and quality. The club built a resolution target based on the 1951 US Air Force resolution target variant used at Webster Field. This target is composed of sets of three parallel lines, equal in width and spacing, where each set is of decreasing size. The smallest set in which each line can still be distinguished in a given image serves as a measure of the image resolution. This target is used on the ground before each flight test to ensure that the camera is optimally focused at infinity, and it is placed in the field to provide an objective measure of image resolution as the camera is tuned. In-flight images of the resolution target confirmed the consistent resolving power between the center and edges of the IDS camera's FOV seen in the earlier subjective tests. An example can be seen below in Figure 13.

Figure 13: Resolution target images from the IDS UI-549SE camera, taken from the center (left) and extreme edge (right) of images taken from 150 ft AGL.
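For context, the ground distance spanned by one pixel (and hence by the smallest resolvable bar group) depends on altitude, focal length, and pixel pitch; none of these values are quoted here, so the figures in the sketch below are placeholders rather than the actual camera and lens specifications.

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Approximate ground footprint of one pixel for a nadir-pointing camera."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Placeholder values (NOT the actual Fenrir camera/lens specifications):
# 150 ft AGL (~45.7 m), 16 mm lens, 4.8 um pixels.
gsd_m = ground_sample_distance(45.7, 16.0, 4.8)
print(f"{gsd_m * 100:.1f} cm/pixel")  # -> roughly 1.4 cm/pixel
```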

Automatic target detection and classification underwent considerable development and testing throughout the year. During each flight test, automatic target recognition is run, testing recent improvements on a new set of data. Recent versions have a false detection rate of about 30%, while characterization has a false positive rate of about 50%. A very important test case is imagery from previous flights at Webster Field. Despite changes in cameras and telemetry formats, the framework provided by the core libraries makes it straightforward to run target recognition on older image sets. On image sets from Webster Field from 2011 and 2012, the system maintained the same success rates given above.
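The interfaces of the core libraries are not documented in this paper; the sketch below is only a hypothetical illustration of the kind of telemetry-adapter layer that lets one recognition pipeline consume image sets recorded with different cameras and log formats.

```python
from abc import ABC, abstractmethod

class TelemetrySource(ABC):
    """Hypothetical adapter: each flight-season log format implements one API."""

    @abstractmethod
    def pose_for(self, image_name):
        """Return (lat, lon, alt_agl_ft, heading_deg) at the moment of capture."""

class Log2013(TelemetrySource):
    def __init__(self, records):
        self._records = records          # e.g. parsed autopilot telemetry rows

    def pose_for(self, image_name):
        return self._records[image_name]

def run_recognition(images, telemetry, detect):
    """Run one detector over any image set, old or new, via the adapter."""
    results = []
    for name, pixels in images.items():
        pose = telemetry.pose_for(name)
        results.append((name, pose, detect(pixels)))
    return results
```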

IV. Safety

Many of the safety features and considerations of the system have been discussed throughout this document. However, certain additional procedures have been developed to further increase the safety of operations.

A. Checklists

Each division of the flight crew (flightline, autopilot, and imagery) has developed checklists that govern its activities prior to, during, and following missions with the UAS. Prior to each flight test or mission, the UAS is inspected and overall system readiness is evaluated. The full checklists are extensive; key portions of several are listed below.

Key pre-mission inspection items include:

• Inspecting propeller, engine, engine mounts and muffler for security, damage and wear

• Inspecting all flight control servos, linkages and surfaces for loose fasteners

• Inspecting all payload modules and associated wiring for security

• Inspecting pitot and static ports for obstruction

• Testing failsafe and aerodynamic termination functions

Immediately prior to engine start, airframe checklists include:

• Ensuring all batteries are fully charged and well secured

• Ensuring the fuel and brake air tanks are full

• Ensuring all wing attachment bolts are secure

• Ensuring all flight controls are free and move in the correct direction

• Ensuring airspace is clear of any other traffic

Autopilot checklists include:

• Ensuring barometric pressure is set and air data zeroed

• Ensuring boundaries and waypoint paths are correctly loaded and set with the correct altitudes

• Ensuring the correct Piccolo is set as the active autopilot in PCC

• Ensuring payload and control system voltages are normal

• Ensuring a GPS lock is acquired

• Ensuring a healthy RSSI on the autopilot link

B. Autonomous Takeoff Procedures

In a further effort to enhance safety, a pneumatic brake system was included on Fenrir. Coupled to both the manual control and autopilot systems, this not only aids in preventing runway overruns but greatly improves autonomous takeoff procedures. With the ARCWulf system, a team member was required to hold the aircraft still at the end of the runway and release it immediately prior to throttle-up to begin the autonomous takeoff. This often resulted in aborted takeoffs due to tracking errors that developed between the release and the launch start, requiring a team member to retrieve the running aircraft, return it to the takeoff position, and try again. With the addition of brakes, the aircraft is manually taxied into takeoff position, the brakes are set, and control is handed over to the autopilot. When the autopilot operator is ready, the launch command is sent, the autopilot releases the brakes, and the takeoff proceeds. If the autopilot detects an error and aborts the takeoff, it either never allows the aircraft to move or brings it to a stop automatically. The entire autonomous takeoff procedure is safer as a result.
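The brake-assisted launch sequence can be summarized as a small state machine; the sketch below is an illustration of the sequencing described above, not the actual autopilot or flight-software implementation.

```python
from enum import Enum, auto

class LaunchState(Enum):
    BRAKES_SET = auto()   # aircraft positioned manually, brakes holding
    ROLLING = auto()      # brakes released, autonomous takeoff in progress
    ABORTED = auto()      # fault detected: hold or stop the aircraft
    AIRBORNE = auto()

def step(state, launch_cmd=False, fault=False, liftoff=False):
    """One transition of the illustrative launch sequence."""
    if state is LaunchState.BRAKES_SET:
        if fault:
            return LaunchState.ABORTED        # never allowed to move
        return LaunchState.ROLLING if launch_cmd else state
    if state is LaunchState.ROLLING:
        if fault:
            return LaunchState.ABORTED        # brought to a stop automatically
        return LaunchState.AIRBORNE if liftoff else state
    return state

# Nominal launch: brakes set -> rolling -> airborne.
s = LaunchState.BRAKES_SET
s = step(s, launch_cmd=True)
s = step(s, liftoff=True)
print(s)  # LaunchState.AIRBORNE
```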

C. Flight Briefing and Communication Procedures

At the start of every flight testing day, all present team members and observers are assembled for an initial briefing. Topics discussed include general safety procedures: all noncritical personnel are to remain behind the plane of the propeller during engine start and run-up, and takeoffs and landings are announced and are mandatory heads-up situations for anyone outside the Ground Control Trailer. If any observers plan to photograph or video the aircraft during flight, they are assigned a spotter to stand with them and monitor their safety and the aircraft's during the activity. Personnel roles are clearly defined and discussed. Immediately before each flight, an additional pre-flight briefing is held to discuss the goals of the flight, the autopilot flight plan, and any special circumstances of the planned test. Takeoff and landing directions are decided based on the current wind conditions. Post-flight briefings are also held to recap the events and results of each flight and to plan any action necessary prior to the next flight.

A general communication protocol is defined and adhered to during flight tests and missions. When transmitting via the crew's VHF radios, the protocol follows that used for radio communication between manned aircraft. During critical flight phases such as takeoff and landing, only flight-critical information is transmitted. In general, the flight director monitors the mission and communicates with the various teams, though teams may also communicate directly. Handover procedures between the external pilot and the autopilot follow a consistent format. An example of communication during an auto-takeoff procedure is as follows:

External Pilot: "Autopilot, aircraft is in position, ready for takeoff. Set brakes and advise when ready for handover."

Autopilot: "Brakes are set, ready for handover."

External Pilot: "Handover in 3, 2, 1, you have the aircraft."

–External pilot clicks aircraft into autopilot control–

Autopilot: "I have the aircraft."

External Pilot: "You have the aircraft."

–Following final airspace check–

External Pilot: "Autopilot, you are cleared to launch at your discretion."

–Following final autopilot systems check–

Autopilot: "Launching in 3, 2, 1, launch."

In-air handover procedures follow a similar format. Special communication during landing includes constant airspeed calls from the autopilot operator to the external pilot. During manual landings, this information helps the pilot maintain proper approach speeds. During autonomous landings, it allows the external pilot to monitor the autopilot's performance and detect an impending stall or runway overrun before it occurs, allowing for an earlier and safer abort.

V. Conclusions

Using a systems engineering approach, the NCSU Aerial Robotics Club has designed, built, and tested Fenrir, aptly named after the mythical Norse wolf, to meet the mission requirements of the 2013 AUVSI SUAS competition. Extensive engineering and testing have been performed on the air vehicle, payload, and software systems, resulting in a synergistic system capable of performing all of the KPPs and most of the optional goals defined in the RFP. The NC State team expects Fenrir to excel in its mission performance on competition day.
