
UTOFIA 633098

Underwater Time Of Flight

Image Acquisition system

Call H2020-BG-2014-2

Topic BG-09-2014

Research and Innovation Action

Project number: 633098

Project duration: February 2015 – April 2018

Project Coordinator: Jens Thielemann, SINTEF

Website: www.utofia.eu

Deliverable ID: D6.3
SyGMa ID: D26
Preparation date: 01.10.2017

Title: System integration – prototype with functional test report

Lead beneficiary (partner): Karl H. Haugholt / STF
Internally reviewed by (name/partner): Yves Chardard / SUB
Approved by: Executive Board

Abstract:

The goal of UTOFIA is to offer a compact and cost-effective underwater imaging system for turbid environments. Using range-gated imaging, the system will extend the imaging range by a factor of 2 to 3 over conventional imaging systems, while at the same time providing video-rate 3D information. This will fill the current gap between short-range, high-resolution conventional video and long-range, low-resolution sonar systems.

This document describes System Two, the final revision of UTOFIA within the project. It has the most powerful laser, firmware capable of real-time processing, and a compact housing. This deliverable presents the system as a whole and some first results.

Dissemination level

PU Public, fully open, e.g. web X

CO Confidential, restricted under conditions set out in Grant Agreement

CI Classified, information as referred to in Commission Decision 2001/844/EC

Deliverable type

R Document, report (excluding the periodic and final reports) X

DEM Demonstrator, pilot, prototype, plan designs

DEC Websites, patents filing, press & media actions, videos etc.

OTHER Software, technical diagram, etc.

Authorship Information

Editor Karl H. Haugholt / SINTEF

Partners contributing SINTEF


Release History

Release number   Date issued   Milestone*           Doc. version   Release description / changes made
1.0              2017-10-26    External approved

* The project uses a multi-stage internal review and release process, with defined milestones. Milestone names include terms (in bold) as

follows:

Planned content and structure proposed: Describes planned contents of different sections. Document authors submit for

internal review.

Planned content and structure revised: Document authors produce new version in response to internal reviewer comments.

Planned content and structure approved: Internal project reviewers accept the document.

Intermediate proposed: Document is approximately 50% complete – review checkpoint. Document authors submit for

internal review.

Intermediate revised: Document authors produce new version in response to internal reviewer comments.

Intermediate approved: Internal project reviewers accept the document.

External proposed: Document is approximately 100% complete – review checkpoint. Document authors submit for internal

review.

External revised: Document authors produce new version in response to internal reviewer comments.

External approved: Internal project reviewers accept the document.

Released: Executive Board accept the document, Technical Manager/Coordinator releases to Commission Services.


UTOFIA consortium

UTOFIA (633098) is a Research and Innovation Action (RIA) within Horizon 2020, the European Union's framework program for research and innovation, Call H2020-BG-2014-2, Topic BG-09-2014. The consortium members are:

Stiftelsen SINTEF (STF)

NO-7465 Trondheim

Norway

www.sintef.com

Project manager: Jens T. Thielemann

[email protected]

+47 930 59 299

Technical manager: Karl Henrik Haugholt, [email protected]

Bright Solutions (BRI)

27010 Cura Carpignano, Italy

www.brightsolutions.it

Contact: Giuliano Piccinno

[email protected]

Odos Imaging Limited (ODOS)

Edinburgh, UK

www.Odos-imaging.com

Contact: Chris Yates

[email protected]

SUBSEA TECH (SUB)

13016 Marseille, France

www.subsea-tech.com

Contact/Exploitation manager:

Yves Chardard

[email protected]

Fraunhofer Institute for

Microelectronic Circuits and

Systems IMS (FHG)

80686 München, Germany

http://www.ims.fraunhofer.de/en/homepage.html

Contact: Marc Benger

[email protected]

AZTI-Tecnalia (AZTI)

48395 Sukarrieta, Spain

www.azti.es

Contact: Iñaki Quincoces Abad

[email protected]

DTU-Aqua (DTU)

2800 Kongens Lyngby, Denmark

Contact: Andre Visser

[email protected]


Table of Contents

1 UTOFIA motivation and background ............................................................................................................................ 8

1.1 Relationship with other deliverables .............................................................................................................................. 8

1.2 Contributors ................................................................................................................................................................... 8

2 System Two – overview and performance summary ...................................................................................................... 9

2.1 System Two overview .................................................................................................................................................. 10

2.2 Summary of experience with System Two ................................................................................................................... 12

2.3 Performance summary ................................................................................................................................................. 13

3 System Two components ............................................................................................................................................. 14

3.1 Housing ........................................................................................................................................................................ 14

3.2 Laser ............................................................................................................................................................................ 15

3.3 Beam optics ................................................................................................................................................................. 17

3.4 Camera ......................................................................................................................................................................... 19

3.4.1 Interleaved vs non-interleaved acquisition ................................................................................................................... 19

3.4.2 Binning ........................................................................................................................................................................ 20

3.5 Camera lens and focus motor ....................................................................................................................................... 21

3.6 Cooling ........................................................................................................................................................................ 22

3.7 Electronics board ......................................................................................................................................................... 24

3.8 System monitoring ....................................................................................................................................................... 25

3.8.1 System monitoring output ............................................................................................................................................ 26

3.8.2 Command overview .................................................................................................................................................... 26

3.8.3 System connection diagram ......................................................................................................................................... 28

3.9 Cabling and data transmission ...................................................................................................................................... 28

3.10 Top-side box with power and communication electronics ........................................................................................... 29

3.11 User interface GUI ...................................................................................................................................................... 30

3.11.1 Fileformat .................................................................................................................................................................... 31

3.11.2 Monitoring interface .................................................................................................................................................... 34

3.11.3 Synchronizing with external cameras ........................................................................................................................... 34

4 Functional test report ................................................................................................................................................... 35

4.1 System Two compared to a high-quality standard camera ........................................................................................... 36

4.1.1 Reference camera ......................................................................................................................................................... 37

4.1.2 Range extension ........................................................................................................................................................... 38

4.2 Comparison with echosounder ..................................................................................................................................... 40

4.3 UTOFIA for fish imaging ............................................................................................................................................ 42

4.3.1 Fish length ................................................................................................................................................................... 43

4.3.2 Fish swimming speed ................................................................................................................................................... 44

4.4 Sea tests with target ..................................................................................................................................................... 44

4.4.1 Tests at Dronningen in Oslo ......................................................................................................................................... 46

4.4.2 Tests at Matre, western Norway ................................................................................................................................... 51

4.5 3D precision depending on range and conditions ......................................................................................................... 56

4.6 Optimizing range, resolution and frame rate ................................................................................................................ 57


Table of Figures

Figure 1: Range-gating reduces the effect of backscattering ...................................................................... 8

Figure 2 System One. ................................................................................................................................ 11

Figure 3: Photos of System Two. .............................................................................................................. 11

Figure 4: Image of internals ...................................................................................................................... 12

Figure 5: Housing. ..................................................................................................................................... 14

Figure 6: How to open the housing. .......................................................................................................... 14

Figure 7: System Two laser ....................................................................................................................... 15

Figure 8: Jitter measurement.. ................................................................................................................... 16

Figure 9: Estimation of laser timing jitter. ................................................................................................ 16

Figure 10: Illumination profile with crossed lenticulars LN611. .............................................................. 17

Figure 11: Horizontal illumination profile.. .............................................................................................. 18

Figure 12: Vertical illumination profile..................................................................................................... 18

Figure 13: Beam optics. ............................................................................................................................. 19

Figure 14: Acquisition of a pallet moving as a pendulum. ........................................................................ 20

Figure 16: Effect of binning on depth estimates........................................................................................ 21

Figure 17: Alternative lens and front flange for a wider field of view. ..................................................... 22

Figure 18: Focus solution for System Two.. ............................................................................................. 22

Figure 19: Pump motor and motor controller. ........................................................................................... 23

Figure 20: Flow rate vs pump height for System One (solid lines) and System Two (dashed lines). ....... 24

Figure 21: PCB overview with description of connections. ...................................................................... 25

Figure 22: System Two power and data cabling.. ..................................................................................... 29

Figure 23: System Two topside box. ......................................................................................................... 30

Figure 24: Screenshot of the GUI. ............................................................................................................. 31

Figure 25: Raw data variables that are stored in the NETCDF file. .......................................................... 32

Figure 26: Attributes of the range gated data. ........................................................................................... 32

Figure 27: Screenshot of Utofia status monitor ......................................................................................... 34

Figure 28: Test setup with Utofia and reference camera. .......................................................................... 36

Figure 29: Datasheet for reference camera. ............................................................................................... 37

Figure 30: Expected range for underwater vision systems. ....................................................................... 38

Figure 31: IDS and Utofia image of box and target plate at the sea bottom. ............................................. 39

Figure 32: Image from the tapered bottom of a fish cage. ........................................................................ 39

Figure 33: Fish imaging. ........................................................................................................................... 40

Figure 34: Image from Didson echosounder of fish in fish cage. ............................................................. 41

Figure 35: Image from UTOFIA of salmon fishes. False color indicates distance to camera. .................. 42


Figure 36: Estimation of length of fish. ..................................................................................................... 43

Figure 37: Estimation of swimming speed and direction. ......................................................................... 44

Figure 38: The target (left) is lowered in a rope under the camera and is used for measuring signal and

contrast. ..................................................................................................................................................... 45

Figure 39: Tests at Dronningen ................................................................................................................. 46

Figure 40: Utofia result for 5 m range and 4 averages per delay step. ...................................................... 47

Figure 41: UTOFIA (left) and IDS images (right) for 5 m range. ............................................................. 48

Figure 42: UTOFIA (left) and IDS images (right) for 5 m range. ............................................................. 48

Figure 43: UTOFIA (left) and IDS images (right) for 7 m range. ............................................................. 49

Figure 44: UTOFIA (left) and IDS images (right) for 5.5 m range. .......................................................... 50

Figure 45: UTOFIA (left) and IDS images (right) for 6.5 m range. .......................................................... 50

Figure 46: Map showing location of Matre research farm. ....................................................................... 51

Figure 47: Overview of the test facility at Matre. ..................................................................................... 51

Figure 48: Water attenuation profile at Matre. .......................................................................................... 52

Figure 49: IDS camera, 9.7 m distance. .................................................................................................... 53

Figure 50: Utofia images at 9.7m distance. ............................................................................................... 53

Figure 51: IDS camera, 13.5m distance. ................................................................................................... 53

Figure 52: Utofia images at 13.5 m distance. ............................................................................................ 54

Figure 53: IDS camera, 10.3 m range. ....................................................................................................... 54

Figure 54: Utofia images at 10.3 m range. ................................................................................................ 55

Figure 55: IDS camera, 13.9 m range. ........................................................................................................ 55

Figure 56: Utofia images at 13.9m range. ................................................................................................. 56

Figure 57: Depth precision versus SNR. ................................................................................................... 56

Figure 58: Frame rate versus image resolution. ........................................................................................ 57

Figure 59: Low-light vs 3D mode ............................................................................................................. 59

List of Tables

Table 1: System Two specifications based on the revised end users survey D1.2 (2016) ........................ 10

Table 2: Data for two relevant pump settings............................................................................................ 23

Table 3: Command overview for system monitor ..................................................................................... 27

Table 4: Connection list for Arduino system............................................................................................. 28

Table 5: Identified risks for System Two from D1.1. Green indicates risk cleared, yellow risk still present for System Two. More details of each individual risk can be found in D1.2. .............................................. 60


Executive summary

This document presents System Two, which is the final hardware version of the UTOFIA camera. System Two fulfils most of the design goals set for the system. An overview is given along with performance parameters and first-hand experiences using System Two. Experimental results from test trials and results of component tests performed at system level are also presented. Finally, the individual components comprising System Two are briefly described.

System Two was built to:

• Demonstrate the technology towards potential markets.

• Be as small as possible within the project's limits to hopefully appear practical for customers.

• Become an experimental tool to test and verify new possibilities.

Two factors have greatly contributed to a significant size reduction:

• a new and extremely compact small diameter laser

• a support flange in the middle of the housing that allowed a thinner housing wall.

The camera was already small but has been modified to overlap somewhat with the laser to reduce the total length. Unfortunately, the demand for a wider field of view has resulted in a much longer lens. Still, the housing volume is less than 7 litres, with potential for further length reduction.

For the software/firmware part of System Two, focus has been put on handling low intensities, visualization and 3D algorithms. For the hardware part, focus has been on more laser power, reduced size, pressure resistance to more than 300 meters and a larger field of view.

Compared to System One, this system provides numerous improvements. In summary, our experience is that:

• The housing has been tested and found leak-safe to depths of at least 250 meters.

• System communication has passed the test through a 70 m submerged cable.

• The laser from BRI has passed tests and is successfully included in System Two, showing low jitter.

• The thermal control, including feedback from the thermistor and control of the pump motor, is effective.

• A new interface board connecting the camera, the laser and the Arduino microcontroller has been set up successfully to handle thermal control as well as the pump motor, the lens focusing motor and laser monitoring.

• The reduced size of System Two is a valuable feature that makes operation simpler.

• System Two delivers live 3D data of high quality.

• Range gating works well; even in poor water conditions the backscatter is suppressed very well.

• The new firmware algorithms work as expected, with a frame rate higher than 10 Hz.

We have performed preliminary testing in two practical applications – sea bed observation and fish farming. Based on these preliminary tests, we see that Utofia provides good images compared to a non-gated reference system. The combined 3D and intensity images make a real difference. In detail:

• Utofia eliminates backscatter and obtains images with good contrast. The limitation is the signal-to-noise ratio when the illumination is attenuated by the water.

• The 3D information makes a difference not only for biomass estimation and object sizing, but also for enhancing the contrast of underwater objects that are otherwise easily overlooked.

• The combined display of backscatter-free images and depth information provides more pleasant images that are easier for the operator to interpret.

• Utofia's performance is in the middle of the expected performance range for range-gated systems, providing data up to approximately 4.5 attenuation lengths (5 m in harbour waters, 15+ meters in sea water).

We believe this system will open several possibilities within the marine sciences, especially with regards

to fish behavior and size estimation.


1 UTOFIA motivation and background

UTOFIA will offer a compact and cost-effective underwater imaging system for turbid environments.

Using range-gated imaging (Figure 1), the system will extend the imaging range by a factor of 2 to 3 over

conventional video systems. At the same time, the system has the potential to provide video-rate 3D

information.

This will fill the current gap between short-range, high-resolution conventional video and long-range low-

resolution sonar systems.

UTOFIA offers a new modus operandi for the main targeted domains of application: marine life

monitoring, harbour and ocean litter detection, fisheries stock assessment, aquaculture monitoring, and

seabed mapping.

Figure 1: Range-gating reduces the effect of backscattering. In this figure an underwater object at a distance of approx. 9 m is imaged. The graph shows the reflected signal from a laser pulse as a function of time. The first peak of the curve corresponds to backscattering from particles in the water. The second, attenuated peak corresponds to the reflection from the object that we are interested in (e.g., a lobster). The camera shutter is kept closed for approximately 50 ns before it opens. Since the image is created from an integration of all light received, when the first 50 ns is gated out, most of the backscattering contribution to the fundamental noise is removed.
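To make the gating idea concrete, the following minimal Python sketch (illustrative only; the pulse shapes, timings and amplitudes are invented and this is not the UTOFIA firmware) integrates a toy return signal with and without a 50 ns gate, and uses the speed of light in water to relate gate delay to target range.

```python
import numpy as np

C_WATER = 2.25e8  # speed of light in water [m/s], roughly c / 1.33

def gate_delay_for_range(range_m):
    """Round-trip travel time to a target at range_m; the gate should open
    at roughly this delay to suppress backscatter from closer particles."""
    return 2.0 * range_m / C_WATER

# Toy return signal: strong backscatter early, weak object echo near 80 ns (~9 m).
t = np.linspace(0.0, 100e-9, 2001)
backscatter = 12.0 * np.exp(-t / 10e-9)
object_echo = 1.0 * np.exp(-((t - 80e-9) / 3e-9) ** 2)

def gated_integral(component, gate_open_s):
    """Integrate a signal component over the time the gate is open."""
    return np.trapz(np.where(t >= gate_open_s, component, 0.0), t)

for gate in (0.0, 50e-9):  # ungated vs gate closed for the first 50 ns
    total = gated_integral(backscatter + object_echo, gate)
    obj = gated_integral(object_echo, gate)
    print(f"gate {gate*1e9:2.0f} ns: object fraction of collected light = {obj/total:.1%}")

print(f"gate delay for a 9 m target: {gate_delay_for_range(9.0)*1e9:.0f} ns")
```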

1.1 Relationship with other deliverables

The system presented in this document builds on the following deliverables:

D1.1 – Preliminary end user requirements and system specification: This document presents the baseline

design of System One

D1.2 – Revised end user requirements and system specifications: This document summarizes the

differences between the plan in D1.1 and the System One actually built, and also the specs for System

Two.

D2.3 – Describes the laser used by System Two.

D4.3 – Presents the housing used by System Two, and also the detailed beam shaping optics.

D5.4 – Details the capabilities of the firmware embedded in System Two.

These deliverables are consortium-internal, so some of the relevant illustrations/information has been

copied in here for the benefit of consortium-external readers.

1.2 Contributors

The following partners have contributed to this deliverable: SINTEF


2 System Two – overview and performance summary

System Two was built to:

• Demonstrate the technology towards potential markets.

• Be as small as possible within the project's limits to hopefully appear practical for customers.

• Become an experimental tool to test potential and/or verify performance for new possibilities.

The baseline design for System Two is given in D1.2, summarized in Table 1. A technically successful System One laid the groundwork for System Two. With all the "basic functions" like cooling and communication working, the focus was to reduce the diameter. Two factors have greatly contributed to a significant size reduction:

• a new and extremely compact small diameter laser

• a support flange in the middle of the housing that allowed a thinner housing wall.

The main trade-off was between laser volume and laser power. Having experienced how much more practical System One was compared to System Zero, the bias was towards a smaller size. This resulted in keeping the laser configuration but tuning the power up and squeezing the diameter (and volume) as much as possible. The Odos camera was already small but was modified to overlap somewhat with the laser to reduce the total length.

Unfortunately, the demand for a wider field of view has been difficult to fulfil. A large sensor area combined with a short focal length and a large aperture gives a long lens with a large diameter. The focal length from System One was kept, but a lens that could cover the whole sensor chip was chosen. This resulted in a longer lens. This lens increases the FOV by 50%. However, reading out the whole chip takes longer, so the frame rate drops. This means that performance is lost when using the maximum field of view, and even at maximum FOV we do not fulfil the customer's requirement. This is not a fundamental problem: a shorter focal-length lens in the same series exists and could give us a FOV of around 70 degrees. In retrospect, this could have been done. SUB came up with a very nice design for the front flange that could have been further developed to handle a larger-diameter lens. Such a lens would require a complex window-supporting structure and possibly stronger materials, since there would be very little material between the windows. It is not clear that this could have been done within the budget. It will, however, be possible to find a solution for limited depths.

Compared to System One, this system provides numerous improvements in terms of compactness, laser

power, depth rating, firmware and software analysis algorithms. Housing volume is less than 7 litres and

there is potential for future length reduction.

The System Two camera is mechanically similar to the front of a standard ODOS camera and uses the

14µm pitch sensor from ODOS. All tests are done with this sensor. The new FHG chip could be used since

it will fit inside the front of an Odos camera, with a small spacer under the camera window to compensate

for the somewhat higher sensor socket.

Firmware inside the camera is updated and includes new features. The main difference is that the camera will give depth estimates at a rate above 10 Hz. There are also numerous other features to improve image quality, such as taking background images for automatic background subtraction and pixel binning to increase the signal-to-noise ratio.

Except for the somewhat lower field of view, System Two fulfils all the requirements given in D1.2, as

summarized in Table 1. Furthermore, almost all of the relevant risks have been cleared as described in

Appendix C.

System Two will be fully capable of revealing the potential for this technology in numerous applications.

The 3D feature seems to be particularly interesting for sizing and biomass estimates.


Table 1: System Two specifications based on the revised end users survey D1.2 (2016)

  UTOFIA basic specifications        System One    Status for System Two
  Range: 0.5 to 5 m (1)              –             OK
  FOV: 70° to 90°                    31°           Nom. 45°, max. 49°
  Depth: 300 m (2)                   100 m         –
  3D: not mandatory (3)              –             –
  Power: < 300 W                     –             –
  Voltage: 12 or 24 V                –             –
  Weight in air: < 10 kg             12.9 kg       9 kg
  Weight in water: neutral           –             2 kg
  Laser safety class: max 3R         –             –

(1) Operator adjustable through GUI
(2) 1 000 m version for the future
(3) Desired range for 3D: 5 m

2.1 System Two overview

System Two has been built according to the plan sketched out in D1.2. An uncertainty was related to the laser diameter and volume. BRI was able to fit the laser into a 130 mm inner-diameter tube, which was our most aggressive goal. With the laser size decided, a detailed design of System Two was done. This design is reported in D4.3; an overview is shown in Figure 2. Then all parts described in D4.3 were produced. During integration, the details around cabling, the voltage limiter, the thermal fuse, the focus motor and the laser beam shielding were worked out. The dense packing of components gave some EMC challenges, mainly affecting the Ethernet communication. These problems were greatly reduced by shielding and rewiring.

This update is illustrated in Figure 2. Figure 3 and Figure 4 show external and internal views of the actual system. As one can see, the outline from D4.2 is implemented with only minor deviations.

The housing of System Two has a diameter of 155 mm and a length of 370 mm; the total volume is 7 litres. The housing is designed to withstand the pressure at 300 meters depth and has been tested at 250 m depth (25 bar).


Figure 2: System One as outlined in D4.3 Laser (red) is mounted to the back flange and holds the

"middle flange". Camera main body is mounted to the middle flange. Power and control

electronics is mounted to the blue part. The blue part is the back lid of the camera and is fed

through a hole in the middle flange.

Figure 3: Photos of System Two. Left; Complete Utofia system with PC, topside power, cable and

housing. Right: Detail of rear part of housing. The thermal isolating flange is easy to remove for

cleaning and inspection.


Figure 4: Image of internals with, from left, lens, focus, camera housing, interface electronics (top)

and laser (below), thermal fuse, back flange with voltage limiter (heater), pump and connector.

The Arduino computer is under the grey flat cable and is connected directly to the interface circuit

board.

The primary improvement is the inclusion of a heat transfer element between the camera and the back flange. The new laser does not have a thick baseplate thermally connected to the back flange. This meant that the thermistor and voltage limiter had to be mounted on the back flange, which has only a little free area. A piece of aluminium was used to connect the camera thermally to the back flange to give a more uniform inside temperature. The thermal switch is mounted on this piece.

The housing for System Two has a diameter of 155 mm and a length of 370 mm (6.98 litres). The total volume is a little more due to the connector and pump housing. The weight is around 9 kg. The housing is designed to withstand the pressure at 300 meters depth and has been tested at 25 bar (250 m).

2.2 Summary of experience with System Two

The first thing to notice is that System Two is much more like a product compared to System One. Features are included that facilitate operations: there are handles for carrying, eye-bolts for fixation and tools for disassembly. The smaller size makes the system easy to transport. The weight is larger than the buoyancy, so it will sink. In operation, that means we do not need to put weights on it to make it sink. It would be possible to thin and lighten the flanges if neutral buoyancy is important. An alternative would be to add external buoyancy foam for weight-sensitive platforms (e.g. mini ROVs).

The machining of the parts seems to be good and accurate. The camera aligns well with the front flange when everything is mounted together. The front and back flanges are not rotationally fixed to the housing: if the back flange is forced to rotate relative to the front flange, something inside will break. During assembly one must take care that the lens and laser optics are aligned with the front flange. The laser optic serves as a key to fix the rotation, but it is not very strong.

The new beam optics works well. We have better efficiency, better uniformity and larger beam divergence than before.

The denser interior and the new layout of the camera gave ground loops and interference that knocked out the communication board. This was unexpected, since we did not see signs of it in System One. The cables were rerouted and shielded, and the communication board was isolated. Moving the communication board from the camera housing to directly above the laser came with a penalty: even though the laser is in an aluminium housing, the Q-switch current caused interference in a critical frequency range, probably through the grounding cables. The rerouting of the cables affected the planned feature of PC-controlled additional heating at start-up. The denser layout has also affected the timing jitter of the laser. The timing jitter is around 200 ps and is dominated by noise in the trigger circuit rather than fundamental laser properties.


With a more powerful laser and a smaller housing, heating to operating temperature is much quicker, around 5 minutes. To boost the heating, the topside box voltage can be increased. The voltage limiter will then draw more current, such that the voltage drop in the cable corresponds to the voltage added. In normal operation there will be little or no current through the voltage limiter. When the laser is idling, the voltage limiter will draw current and limit the temperature drop inside the housing.
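The compensation mechanism can be illustrated with a minimal Ohm's-law sketch. All numbers below (cable resistance, limiter threshold, idle load current) are assumptions chosen for illustration, not measured values from the UTOFIA system.

```python
# Minimal Ohm's-law sketch of the voltage-limiter idea described above.
# Cable resistance, limiter threshold and load current are assumed values.

CABLE_RESISTANCE = 1.5    # ohm, round-trip resistance of the submerged cable (assumed)
LIMITER_THRESHOLD = 26.0  # V, voltage above which the limiter starts to conduct (assumed)

def limiter_current(topside_voltage, system_current):
    """Extra current the limiter draws so that the housing voltage is clamped
    near LIMITER_THRESHOLD; the extra cable drop then equals the extra topside
    voltage, and the power dissipated in the limiter heats the housing."""
    v_unclamped = topside_voltage - CABLE_RESISTANCE * system_current
    if v_unclamped <= LIMITER_THRESHOLD:
        return 0.0
    return (v_unclamped - LIMITER_THRESHOLD) / CABLE_RESISTANCE

for v_top in (27.0, 30.0):          # normal vs boosted topside voltage
    i_sys = 1.0                     # idling load current (assumed)
    i_lim = limiter_current(v_top, i_sys)
    v_house = v_top - CABLE_RESISTANCE * (i_sys + i_lim)
    heat = v_house * i_lim          # power dissipated in the limiter (heater)
    print(f"topside {v_top:4.1f} V -> housing {v_house:4.1f} V, "
          f"limiter {i_lim:3.1f} A, heating {heat:4.1f} W")
```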

The new firmware developed by Odos works very well. Most of the 3D processing is now done in the camera, which has boosted the frame rate. When the distance calculation is done in the camera, the data amount is greatly reduced, allowing a higher frame rate. The camera will send out an intensity image, a depth map and 3D signal information at more than 10 Hz. A new feature is binning of pixels. Binning reduces the data amount and increases the signal-to-noise ratio; the cost is lower resolution. It is possible to have more binning in the depth image than in the intensity image. This is a good thing, since the depth calculation is more sensitive to noise and the distance varies more slowly than the intensity. Binning is a solution if you have to operate over a 100 Mb/s Ethernet line, as the sketch below illustrates.
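A back-of-the-envelope sketch of the data rates involved is given below. The image format, bit depth and stream count are assumptions for illustration and do not reflect the exact UTOFIA data protocol.

```python
def stream_rate_mbps(width, height, bits_per_pixel, frames_per_s, bin_factor=1):
    """Data rate of one image stream after bin_factor x bin_factor binning, in Mb/s."""
    pixels = (width // bin_factor) * (height // bin_factor)
    return pixels * bits_per_pixel * frames_per_s / 1e6

WIDTH, HEIGHT, BITS, FPS = 1000, 500, 16, 10  # assumed image format and bit depth

# (intensity binning, depth binning): more binning can be applied to the depth map.
for b_int, b_dep in [(1, 1), (2, 2), (2, 4)]:
    total = (stream_rate_mbps(WIDTH, HEIGHT, BITS, FPS, b_int)
             + stream_rate_mbps(WIDTH, HEIGHT, BITS, FPS, b_dep))
    verdict = "fits" if total < 100 else "does not fit"
    print(f"intensity {b_int}x{b_int}, depth {b_dep}x{b_dep}: "
          f"~{total:5.1f} Mb/s -> {verdict} on a 100 Mb/s line")
```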

The new visualization software gives nice images and stores image data. The software lets the user control the camera to get the most out of the actual imaging conditions. The Utofia system is quite flexible and can be configured in many ways. The number of control parameters can be overwhelming and difficult to optimise; with more experience, better guidelines and standards can be developed. The software is well suited to exploring the fundamental limitations of the system. The software has been developed in Matlab, C++ and C#. While the real-time acquisition loop runs in C++/C#, Matlab is used for the main display. In the project, this has provided important benefits in terms of agility of development, but it now also means that display rates beyond 10 Hz are difficult to achieve. This was a trade-off early in the project that we have to live with now.

2.3 Performance summary

We have tested System Two in two different water qualities. In both situations, we end up with the same conclusion: Utofia is able to take useful images up to around 4.5 attenuation lengths. One attenuation length is the distance over which light is attenuated to 1/e (37%). In water with an attenuation length of 1.5 m, we saw a target at 7 m distance. In water where the attenuation length gradually improved with depth from 2.5 to 4 m, we saw a target at 14 m. In very clear water, the inverse-square drop of signal with distance becomes significant. Utofia is able to eliminate backscatter and obtain images with good contrast; the limitation is the signal-to-noise ratio when the illumination is attenuated by the water.
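The attenuation-length arithmetic behind these statements can be sketched as follows; the attenuation lengths used are the approximate values quoted above, and the transmission is computed one-way, ignoring geometric spreading.

```python
import math

def attenuation_lengths(target_distance_m, attenuation_length_m):
    """Number of attenuation lengths to the target; one attenuation length
    reduces the one-way light level by a factor 1/e (~37%)."""
    return target_distance_m / attenuation_length_m

# The two test cases quoted above (attenuation lengths are approximate).
print(f"harbour water: {attenuation_lengths(7.0, 1.5):.1f} attenuation lengths")   # ~4.7
print(f"clearer water: {attenuation_lengths(14.0, 3.0):.1f} attenuation lengths")  # ~4.7

# One-way transmission after n attenuation lengths, ignoring geometric spreading.
for n in (1, 3, 4.5):
    print(f"{n} attenuation lengths -> transmission {math.exp(-n):.1%}")
```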

The standard deviation of the distance values in the 3D images is estimated at 1-2 cm up to 3 attenuation lengths for a white target. A 10% reflectivity target at 3 attenuation lengths will have a distance variation of 5-10 cm. At 14 m range, we got a distance variation of 10 cm for a white target. In summary:

• Intensity and 3D images up to 4.5 attenuation lengths

• 3D precision 1-2 cm up to 3 attenuation lengths

We believe this system will open up many possibilities within the marine sciences, especially with regards

to fish behavior and size estimation.


3 System Two components

3.1 Housing

This section provides only a brief overview of the housing, with the detailed design being described in

D4.3.

For practical reasons, some changes have been made. The windows are 15 mm thick due to delivery problems with 20 mm acrylic plate of high optical quality. The pressure resistance of the 15 mm thick camera window is calculated by SUB to 28 bar (280 m). That was assumed to be acceptable for System Two.

The total weight of the housing, including eyebolts and brackets, is 9.9 kg. The volume is 7 litres.

Figure 5: Housing. Left: housing resting on the floor. Middle: front of the housing. The external vertical diffusor is in the upper part, attached with two M6 bolts. Two eye-bolts with zinc anodes are on the sides. A bracket is mounted under the camera window to make it possible to rest the housing on the floor. Right: back of the housing with cable, eye-bolt, cooling water outlet (orange) and pump housing with inlet filter.

Details of the housing are shown in Figure 5. A handle can be mounted on the front flange for carrying (Figure 6, middle image). A bracket in the front makes it possible to place the housing on the floor without damaging the windows. The easiest way to carry the housing is by a rope in the eye-bolt in the back flange. During operation the housing should be secured by ropes in both the front and back flanges. The thermal insulation on the back flange is easy to remove for service and inspection of the cooling canal. The housing can be opened according to the instructions given in Figure 6. When the front flange is removed, the windows can be pushed out and changed if necessary.

Figure 6: How to open the housing. Behind the bracket in the front there is a draw bead that secures the front flange; this is the green plastic filament in the lower part of the left picture. Pull it out with pliers. Mount the handle as seen in the middle picture and pull the front flange out. For the back flange, remove the eye-bolt in the back flange (right picture). Under the bolt is an M8 thread. Screw in a long M8 bolt and pull the thermal isolation flange out. Under the thermal flange is a draw bead, as in the front flange. Pull it out, mount the handle bar and pull out the back flange.


3.2 Laser

The System Two laser is shown in Figure 7. The new laser is more compact and delivers more power than

the System One laser. It has been operating faultless during all the tests. The laser has built in safety

features that protects the laser. When the power is turned on the laser will operate normally except that the

q-switch is not triggered before the temperature is in the right interval. So, there is no laser emission before

the laser is in the correct temperature interval. This interval is several degrees and it is easy to keep the

laser at the correct temperature for the cooling system. The laser operates at a single supply voltage, the

built in DC-DC converters operates at an input voltage between 20 and 30 volts. Outside this interval the

laser is shut off.

The laser has a mounting flange at the rear, and all the internal components are mounted on this flange. This flange is mounted onto the back flange of the housing, with thermal grease used to minimize the temperature difference. By stabilizing the back flange at 22 °C, optimum temperature conditions are obtained inside the laser. The required laser temperature is set by the diodes used to excite the active atoms inside the laser crystal: for optimum excitation, the diode emission wavelength must be temperature-tuned to match the absorption wavelength of the laser crystal. The operating temperature can be changed by ordering diodes with a different nominal wavelength at 25 °C. The maximum seawater temperature for optimal operation of System Two is 20 °C; above that, the performance will decrease. The laser will operate with somewhat reduced power up to a seawater temperature of around 23 °C. Above that, a protection circuit will shut down the laser.

In the front of the laser is a beam expanding lens, mounted on the middle flange. The middle flange is there

to support the housing and is not part of the laser. Between the rear flange and the middle flange there is a

two-piece cover, one flat part with connectors and one curved part. System tests showed that it was

necessary to tape the two parts together with copper tape to reduce electromagnetic interference from the

laser. System Two is very dense in the rear part and electromagnetic interference (EMI) has been a larger

issue than expected.

Figure 7: System Two laser, out of the box (left) and mounted (right). The laser emits 3-3.5 mJ per pulse at up to 1 kHz repetition rate. The timing jitter is 200 ps RMS.


Figure 8: Jitter measurement. The camera trigger signal is picked out from the laser signal connector (blue trace in the right picture). The laser pulse is picked up by a high-speed detector (green trace in the right picture). The RMS timing jitter between the pulses is measured by an oscilloscope function to around 200 ps. Just before the laser pulse, interference from the Q-switch can be picked up. There is less than 50 ps jitter between the Q-switch and the laser pulse.

The laser jitter has been measured in two ways, both giving around 200 ps. This is a respectable number, and we believe it can be even better. The timing jitter between the camera trigger pulse and the laser pulse emission was measured with an oscilloscope as described in Figure 8. The main contribution to the jitter seems to be noise in the Q-switch trigger circuit, generated by the DC-DC converters inside the laser. As can be seen in Figure 8, the interference picked up on the trigger signal and on the laser shield (yellow trace) is clearly visible. The frequency content of this signal seems to be in a sensitive domain for the Ethernet communication module; initially the communication module was knocked out as soon as the Q-switch was turned on.

The timing jitter was also estimated from several delay sweeps. A sweep is shown in Figure 9. Comparing the mean image intensity in the central part of the image for several consecutive sweeps reveals information about laser intensity noise and jitter.

Figure 9: Estimation of laser timing jitter. The blue curve in the left picture shows a delay sweep for a flat target. Intensity variation is measured at delay steps 14 and 20, shown to the right. At delay step 14, timing jitter contributes to intensity noise due to the slope in the camera response. A jitter of 200 ps corresponds to the observed increase in intensity noise.

In Figure 9 we show relative intensity variations for intensity noise, taken at sweep position 20, where timing jitter has little or no effect on measured intensity, and jitter noise, taken at sweep position 14, where timing jitter has a large effect on measured intensity. We see that the intensity noise is 2.2 %, well within specs. Jitter noise and intensity noise combined is 7.1 %, which gives a jitter noise alone of 6.8 % (assuming the two contributions add in quadrature: √(7.1² - 2.2²) ≈ 6.8 %). From the sweep, we observe that the intensity increases by 56 % per delay step (or 33 % per ns) at delay step 14.


The observed jitter noise of 6.8 % can then be explained by a jitter of 200 ps, in line with the measurements performed with the oscilloscope. In this set-up the housing is closed, so potential noise from test wires and oscilloscope trigger errors is eliminated.
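The calculation behind this estimate can be sketched as follows, using the numbers quoted above and assuming that the intensity-noise and jitter-noise contributions are independent and therefore add in quadrature.

```python
import math

# Sketch of the jitter estimate from the delay sweep (Figure 9).
# The numbers are those quoted in the text; the quadrature assumption is ours.

intensity_noise = 0.022  # relative intensity noise at sweep position 20
combined_noise = 0.071   # intensity + jitter noise at sweep position 14
slope_per_ns = 0.33      # relative intensity change per ns at position 14

# Remove the pure intensity noise (assumed independent, so it adds in quadrature).
jitter_noise = math.sqrt(combined_noise**2 - intensity_noise**2)   # ~0.068

# Convert the residual intensity fluctuation to a timing jitter via the slope.
jitter_ns = jitter_noise / slope_per_ns
print(f"jitter noise ~{jitter_noise:.1%}, timing jitter ~{jitter_ns*1e3:.0f} ps")
```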

3.3 Beam optics

The beam expansion consists of the laser lens, mounted on the laser housing, giving a beam divergence of

around 6° (FWHM), and two crossed lenticular arrays mounted near the exit window. One lenticular array

is glued onto the inside of the laser exit window, and one is mounted outside of this window. In

combination, this gives a nearly top-hat, rectangular beam profile. Several configurations have been

designed, each corresponding to a desired image format (pixel resolution). The default illumination profile

is shown in Figure 10.

Figure 10: Illumination profile with crossed lenticulars LN611. This image is taken with the

underwater housing submerged in an aquarium looking through a side wall. The external diffusor

must be submerged to operate as intended.

With the system submerged in a water tank, looking at a curtain in air, we get the following results. The image resolution was measured at 477 pixels/meter at a distance of 260 cm, giving 2.1 mm/pixel. Using 1216 pixels as the image width, this corresponds to a horizontal field of view of 52 degrees (full angle) in air. Using the camera data, with 14 µm pixels and a 17.5 mm focal length, the corresponding value is also 52 degrees, which corresponds to 38 degrees in water.
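These numbers can be cross-checked with a simple pinhole-camera model and Snell's law at a flat port; the sketch below uses the sensor and lens data quoted above, and the refractive index of water is assumed to be 1.33.

```python
import math

PIXEL_PITCH_M = 14e-6
FOCAL_LENGTH_M = 17.5e-3
WIDTH_PX = 1216
N_WATER = 1.33  # assumed refractive index of water

def fov_air_deg(width_px, pixel_pitch_m, focal_length_m):
    """Full horizontal field of view in air from sensor width and focal length."""
    half_width = width_px * pixel_pitch_m / 2.0
    return 2.0 * math.degrees(math.atan(half_width / focal_length_m))

def fov_water_deg(fov_air, n_water=N_WATER):
    """Field of view behind a flat window: refraction compresses the half-angle."""
    half = math.radians(fov_air / 2.0)
    return 2.0 * math.degrees(math.asin(math.sin(half) / n_water))

# From the measured resolution: 477 px/m on the target at 2.60 m distance.
half_angle_measured = math.atan((WIDTH_PX / 477.0) / 2.0 / 2.60)
fov_lens = fov_air_deg(WIDTH_PX, PIXEL_PITCH_M, FOCAL_LENGTH_M)
print(f"FOV from measurement: {2*math.degrees(half_angle_measured):.0f} deg in air")
print(f"FOV from lens data:   {fov_lens:.0f} deg in air")
print(f"FOV in water:         {fov_water_deg(fov_lens):.0f} deg")
```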

Figure 11 and Figure 12 show the observed intensity profile measured both using the camera and by

moving a power meter across the scene. The x-values are scaled using the results above, the intensity values

are scaled based on the power meter measurements and the size of the power meter measurement head.

Both horizontal and vertical fields of illumination have the width expected from Zemax optical simulations,

however, the profiles are slightly less top-hat than expected. This could be due to imperfect lenticular shape

or other effects.


Figure 11: Horizontal illumination profile. The internal diffuser is an acrylic lenticular array,

LN611. The blue profile is measured by the camera and the red is measured by a power meter

scanning the target. The difference is caused primarily by vignetting.

Figure 12: Vertical illumination profile. The external diffuser is an acrylic lenticular array, LN611.

The profile is measured by the camera.

Integrating over the entire image, we reach an estimated power at the scene of 1.5 W at 850 Hz repetition

rate, corresponding to 1.8 W at 1000 Hz. This is in reasonable agreement with the expected power reaching

the scene, taking into account the reflection and transmission losses of two lenticulars, housing window,

water and glass wall, and vignetting from the lens.

The field of illumination can be increased by using diffusers made of polycarbonate (PC). Polycarbonate

has a higher index of refraction, 1.59 compared to 1.49 for acrylic (PMMA). The same lenticular profile

(LN611) will give a wider illumination. We have made one laser window with PMMA diffusor and one

with PC diffusor. There are also two versions of the external diffusor, one PMMA and one PC. This is

shown in Figure 13.


Figure 13: Beam optics. Left image: A lenticular array is glued to an acrylic window. The upper

left part shows the window with PC diffusor. Under window external diffusors in PMMA and PC

are shown. Right image shows the window mounting and the mounting holes for the external

diffusor.

3.4 Camera

The System Two camera is mechanically similar to the front of a standard Odos camera and uses the 14µm

pitch sensor from Odos. All tests are done with this sensor. The new FHG chip could be used in the future

with minor modifications of the housing. It will fit inside the front of an Odos camera but the FHG sensor

board uses a socket resulting in a longer camera. Since the FHG camera has a smaller chip another lens

will probably be needed.

Firmware inside the camera is updated and includes new features that are defined in D5.4. The main difference is that the camera will give depth estimates at a rate above 10 Hz. There are also numerous other features to improve image quality, such as taking background images for automatic background subtraction, pixel binning to increase the signal-to-noise ratio, and interleaved sequencer acquisition.

Extensive functional testing of the added functionality was performed. Here we summarize the main findings.

3.4.1 Interleaved vs non-interleaved acquisition

Assuming we would like to acquire images at R ranges with 2 accumulations per range, a non-interleaved acquisition would sequence these images as I_1^1, I_2^1, I_1^2, I_2^2, …, I_1^R, I_2^R, while an interleaved acquisition would sequence them as I_1^1, I_1^2, …, I_1^R, I_2^1, I_2^2, …, I_2^R, where I_a^r denotes accumulation a at range r.
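The two orderings can be written as a small sketch (illustrative only, not the actual sequencer code), where a tuple (a, r) stands for accumulation a at range r:

```python
def non_interleaved(num_ranges, num_accumulations):
    """All accumulations at range 1, then all at range 2, and so on."""
    return [(a, r) for r in range(1, num_ranges + 1)
                   for a in range(1, num_accumulations + 1)]

def interleaved(num_ranges, num_accumulations):
    """One full sweep over all ranges per accumulation pass."""
    return [(a, r) for a in range(1, num_accumulations + 1)
                   for r in range(1, num_ranges + 1)]

print(non_interleaved(3, 2))  # [(1,1), (2,1), (1,2), (2,2), (1,3), (2,3)]
print(interleaved(3, 2))      # [(1,1), (1,2), (1,3), (2,1), (2,2), (2,3)]
```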

In the sweeps that we acquire, the first image of the sweep is most often a "background" image, i.e. an image gated far away (>20 m) such that a minimal number of photons hit the sensor. We have found that it is important to have as up-to-date a background image as possible to reduce the effect of fixed-pattern noise when operating in low-light situations. Interleaving the sequence provides us with the most recent estimate of the background image.

There is a trade-off with respect to interleaving the sequence when there is relative motion between the camera and the objects of interest. A result of interleaving the acquisition is that the total exposure time of a single range image becomes the total acquisition time of the whole sweep. If the sequencer is set up to acquire sweeps at 10 Hz with interleaving, the exposure time is 100 ms. In Figure 14 we show the effects of interleaving the acquisition on both the depth map and the intensity image. A pallet is hanging from the ceiling and moving as a pendulum from left to right. At the boundary between the pallet and the background there is a large depth boundary (and intensity boundary). The top row shows an intensity image and an


The top row of Figure 14 shows an intensity image and the associated depth image acquired without interleaving. Notice that the intensity boundary is crisp because the total exposure for this intensity image is short (100 ms/(N*R)*2), while the depth map boundary is noisy because the pallet has moved between the exposures at the different ranges. The bottom row shows the corresponding images with interleaving. Notice that the intensity image is blurred because its exposure time equals the total acquisition time for the sweep (100 ms). However, the depth boundary is less noisy than in the non-interleaved case because all ranges are acquired much closer in time.

Figure 14: Acquisition of a pallet moving as a pendulum at approximately 3 m distance from the camera. Images are acquired with and without interleaving, using 8 accumulations. The top row shows an acquisition without interleaving, while the bottom row shows the result with interleaving. Notice that the intensity image (left column) is blurred when interleaving is enabled, while the depth map (right column) is noisier when interleaving is disabled.

3.4.2 Binning

The camera offers a number of different image streams (full sweep, intensity image and four images related to the peak (depth) estimation algorithm) that can be streamed over the Gigabit Ethernet cable to the host PC. However, because of bandwidth constraints, it is not possible to transfer full-resolution images for all these streams at 10 Hz over the 1 Gbit Ethernet cable; hence it is necessary to bin images to reduce their size.

The camera offers independent binning of each of the three major image streams (full sweep, intensity image and depth image). Another major benefit of binning, besides data reduction, is an increase in the SNR of the images; this is especially important before estimating depth images.


In Figure 15 we show depth images with different binning of a target at a distance of approximately 7.5 m (the attenuation length of the water is approximately 2.2 m). In the left image we have binned intensities 2x2 before depth estimation, while in the middle image we have binned 8x8. The resulting increase in SNR by a factor of 4 makes it possible to obtain better (less noisy) depth estimates of the box, which can be seen in an intensity image in the rightmost image.
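The following Python sketch illustrates NxN binning and the resulting SNR scaling under the simplifying assumption of uncorrelated, signal-independent pixel noise; it is not the camera's actual implementation.

import numpy as np

def bin_image(img, n):
    # Sum n x n blocks of pixels (cropping the image to a multiple of n first).
    h, w = img.shape
    h, w = h - h % n, w - w % n
    return img[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

# Under the stated noise assumption, binning n x n pixels improves SNR by a factor n,
# so going from 2x2 to 8x8 binning gives a factor 8/2 = 4, as quoted for Figure 15.
print(np.sqrt(8 * 8) / np.sqrt(2 * 2))  # 4.0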

Figure 15: Effect of binning on depth estimates. A box is located approximately 7.5 m from the camera. The water attenuation length is approximately 2.2 m. The left image shows the depth estimates when binning intensities 2x2, while the middle image shows the depth image after binning 8x8. Notice the more reliable depth estimates in the middle image, which are due to the increase in SNR.

3.5 Camera lens and focus motor

To increase the field of view, a lens for a larger image sensor was chosen for System Two. The lens chosen was a Voigtlander Nokton lens with 17.5 mm focal length and f/# 0.95. This choice has two limitations. First, utilizing the larger field of view reduces the frame rate, which lowers the average laser power, and laser power is needed to get good images. Second, a larger picture takes longer to process, and the live feeling at 10 Hz or more is lost. Therefore, full resolution has not been used. We have, however, changed the standard image format from 800x600 to 1000x500. The increased width makes orientation under water easier. On the other hand, a narrower field of view gives a larger aperture that can better see through particles in the water, and a narrower illumination also gives a better range.

Ideally, we wanted a somewhat shorter focal length than 17.5 mm. The only option we have found for a shorter focal length is a Voigtlander Nokton lens in the same series with 10.5 mm focal length, which is a large step down. This lens is even bigger than the 17.5 mm lens, and lens size has been an issue for System Two. BRI has found a very good mechanical solution for the front flange that handles the bigger lens well. With System Two available to check the safety margins, it turned out that the 10.5 mm lens could be fitted with small modifications of the front flange. A new front flange has been made but has not been tested. A picture of the new front flange and the new lens is shown in Figure 16.


Figure 16: Alternative lens and front flange for a wider field of view. As can be seen in the right

picture there is not much space between the lens and the laser window.

SUB has simulated the new flange, and it should withstand more than 30 bar (300 m). The weakest part is the 15 mm thick window, which can handle 28 bar. Future tests will show whether this solution meets expectations. We expect a wider field of view at full frame rate, but with lower resolution and range.

System Two uses the same focus solution as System One, but in a smaller version. A 3D-printed ring fits exactly into the grooves in the lens focus ring and is secured by three plastic screws. There is a hole for a link arm in the 3D-printed ring, and a short link arm connects the ring to the servo arm. The servo is a thin-wing (RC) aircraft servo with metal gears (A7050 HV Thin Wing Hi-torque MG Aircraft Servo). The focus range is from infinity to around 1 m. The focus solution is compatible with the 10.5 mm focal length lens. The solution is shown in Figure 17.

Figure 17: Focus solution for System Two, seen from the side in the left image and from the front in the right image. There is much less space around the camera in System Two; as can be seen in the right picture, there are only a couple of millimetres of clearance to the housing wall for the servo and link arm.

3.6 Cooling

The System Two pump was tested and compared with measurements from System One. From the diagram in Figure 19, it is clear that both the pump height at zero flow and the volume flow at zero height are much lower than for System One. A higher flow resistance and a lower pump efficiency could explain this. The pump also consumes more power at the same motor setting, compared with System One.

The reason for this seems to be the motor controller. The pump efficiency is not a critical parameter in itself; 35 W of pump power is a small fraction of the total power. However, when using the 70 m cable, the pump current lowered the housing voltage so much that the laser stopped. The pump setting was lowered from "60" to "50" to make the system operate normally. Such a low setting will not give sufficient cooling in warmer sea water.

As far as we have been able to test, the problem lies in the motor controller (shown in Figure 18). The motor controller used in System One is no longer available, so we were forced to use a "similar" controller. Recently, a new version of the previous controller became available with a higher power rating. With the new version, we were able to do relevant experiments with the programmable motor settings. The present controller in System Two could not be programmed to match the pump motor (we have an in-runner motor and the controller is strictly for out-runner motors). The result is a dramatically low efficiency.


The calculated pump work is less than 1 W, even with the longer and narrower cooling channel in System Two. Most of the energy is used simply to turn the motor.

As soon as the experiments for this deliverable are done, we plan to change the motor controller. Lab experiments show that we should expect half the power consumption with the latest version of the old controller. Hopefully we will then have the same performance as System One.

With a pump motor setting of M=60, the temperature rise of the cooling water with a 200 W load is 3.3°C. This number should come down to less than 1°C with a suitable controller. This will raise the limit for the water temperature in which the system can be used.
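As a rough sanity check of these numbers (a sketch, assuming water with a specific heat of about 4186 J/(kg·K) and neglecting losses):

def cooling_water_delta_T(power_W, flow_L_per_min, cp=4186.0):
    # Temperature rise of the cooling water for a given heat load and flow rate.
    mass_flow_kg_per_s = flow_L_per_min / 60.0   # 1 L of water is roughly 1 kg
    return power_W / (mass_flow_kg_per_s * cp)

print(round(cooling_water_delta_T(200, 0.88), 1))  # ~3.3 C at M=60 (cf. Table 2)
print(round(cooling_water_delta_T(200, 0.55), 1))  # ~5.2 C at M=50 (cf. Table 2)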

Figure 18: Pump motor and motor controller. The pump motor is red and can be seen in the left part of the image; it is a brushless AC motor. The motor controller, a 20 A Skywalker from Hobbywing, can be seen in the right part of the image. Its switching frequency is too low for the pump motor, resulting in very low efficiency. Both timing and switching must be programmed correctly: for high efficiency we need low timing and a high switching frequency.

Table 2: Data for two relevant pump settings.

Pump setting | Flow [L/min] | Cooling water heating for 200 W cooling capacity (@ 100 % duty cycle) [°C] | Pump power consumption [W]
M=60 | 0.88 | 3.3 | 34
M=50 | 0.55 | 5.2 | 19


Figure 19: Flow rate vs pump height for System One (solid lines) and System Two (dashed lines).

3.7 Electronics board

To shorten the Odos camera, the rear section of the Odos camera was replaced by a new electronics board placed above the laser. This board was developed into a "do it all" board: it connects all units, supplies the camera and pump with power, includes a µ-controller, and buffers the trigger output from the camera. This board was important for shrinking the volume of System Two.

A number of maintenance functions must be performed in order to keep the system operating, especially with regard to temperature monitoring and stabilization. Furthermore, to allow for remote diagnosis, various monitoring of the camera and the laser has been implemented.

We have therefore developed a custom PCB that:

- Monitors temperature both of laser and back flange

- Controls cooling motor

- Monitors camera pulse rate

- Enables/disables laser firing

- Controls focus motor

- Measures in-house humidity and temperature

This card also replaces the backmost card of the Odos camera by including the trigger circuits etc. necessary for camera/laser synchronization.

A brief overview of all connection points is given in Figure 20.


Figure 20: PCB overview with description of connections.

3.8 System monitoring

To better monitor and control the underwater unit, a small micro-controller is included inside the housing.

The µ-controller performs the following tasks:

• Monitoring the laser temperature, using a thermistor attached/screwed to the laser housing.

• Monitoring the backlid temperature, using a thermistor attached to the backlid.

• Controlling the cooling motors (both the one used subsea and the one used topside), based on the

laser temperature readings.

• Monitoring internal humidity and temperature levels using a digital thermometer and humidity

sensor.

• Controlling the lens focus using a servo motor.

• Ensuring that the laser is disabled unless the camera is running, by sensing whether the camera sends trigger signals to the laser.

• Transmitting current status to the topside computer, and accepting commands for control.


3.8.1 System monitoring output

The system reports status and accepts commands sent over the RS485 link. In System One, an RS485-to-TTL-to-TCP/IP converter is included, meaning that the system can be reached by telnetting to 192.168.0.7.

The system continuously reports its status as a plain text message:

PulseMon: value 0 thres 200 sysok 1 pin 22 state 0 canEnable 1

TempMon: land . Lower 330, upper 360, temp 387, duty cycle 1092/8192 forced 0 state 0 tooOld 0 tempB 396 upperB 370 lowerB 400

TempMon: subsea. Lower 325, upper 340, temp 387, duty cycle 1092/8192 forced 0 state 0 tooOld 0 tempB 396 upperB 370 lowerB 400

Humidity: hum 29.10 temp 34.30

Laser temperature: 387, 37.3 C

Back flange temperature: 396, 36.4 C

Heating current: 4, extraheat 0

In short, this communicates the following:

• PulseMon: Filtered signal of the pulses from camera to laser. Higher values correspond to higher pulse rates. The monitoring controller does not assert the laser system enable before a sufficient pulse frequency is reached; this is controlled through the threshold.

• TempMon – land/subsea. "Temp" indicates the current measured temperature. Lower/upper indicate the thresholds for engaging either the subsea cooling motor or the land-based cooling motor. The temperature sensor is thermistor based, meaning that lower values are warmer. The upper threshold thus indicates when no motor action is desired, the lower threshold when full motor action is desired. The motor is run at constant speed but with a varying duty cycle. Two temperatures are measured: temp (laser) and tempB (backlid). Separate temperature thresholds are applied to these readings to calculate a per-sensor duty cycle, and the highest duty cycle is then applied to the motor.

• Humidity: This reports internal humidity and temperature inside the unit

• Laser temperature/back flange temperature: Reports raw thermistor values and temperature of back

flange.

• Heating current: Reports heating current applied to back flange.

While the output is human readable, a separate user interface is included (section 3.1.1) which displays the values more clearly and provides audible warnings if any of the values exceeds its predefined threshold.
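As an illustration of how topside software might read this status stream, the following Python sketch parses two of the lines shown above. The line formats are taken from the example output; the function names are ours, and this is not the actual UTOFIA GUI code.

import re

def parse_humidity(line):
    # E.g. "Humidity: hum 29.10 temp 34.30"
    m = re.match(r"Humidity: hum ([\d.]+) temp ([\d.]+)", line)
    return {"humidity_pct": float(m.group(1)), "temp_C": float(m.group(2))} if m else None

def parse_laser_temperature(line):
    # E.g. "Laser temperature: 387, 37.3 C" (raw thermistor value, then degrees C)
    m = re.match(r"Laser temperature: (\d+), ([\d.]+) C", line)
    return {"raw_thermistor": int(m.group(1)), "temp_C": float(m.group(2))} if m else None

print(parse_humidity("Humidity: hum 29.10 temp 34.30"))
print(parse_laser_temperature("Laser temperature: 387, 37.3 C"))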

3.8.2 Command overview

All commands are of the format nnnXX, where nnn is a number and X is the command character to be issued. Note that all commands must be preceded by the number, and the command character must be duplicated, e.g. "40ff". No spaces are used.
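A minimal sketch of how a host-side helper could format such commands; the helper name is ours, and only the nnnXX format with a duplicated command character is taken from the text above.

def make_command(value, cmd):
    # Value first, then the command character duplicated, no spaces: e.g. (40, "f") -> "40ff"
    return "{}{}{}".format(value, cmd, cmd)

print(make_command(40, "f"))  # "40ff" - set focus to 40
print(make_command(1, "e"))   # "1ee"  - enable the laser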

For temperature control in System Two, the temperature is measured in two places: the first temperature is measured within the laser, the second in the back lid. The system allows separate limits to be specified for the two measurements, and the one resulting in the highest duty cycle will be used for the motor. In the command set, the lower-case command sets the limits for the laser temperature and the upper-case command the limits for the backlid temperature.

Table 3 provides an overview of available commands. Most settings are stored in EEPROM when set.


Table 3: Command overview for system monitor

Command | Value range (default) | Comment
c | 0 | Starts Electronic Speed Controller (ESC) programming for the cooling motor controller. This first switches off power to the controller, then powers it on, sets the speed to maximum, and sets the speed to minimum when the user desires. It then cuts power to the ESC and starts it over. This sequence allows low-level programming of the motor controller.
d | 0-8192 | Directly sets the time in milliseconds the motor should be switched on.
e | 0-1 | Turns the laser on/off. Uses the enable signal to the laser.
f | 30-180 | Adjusts focus.
h | 0-255 | Adds extra heat-up to the laser. Values > 0 make the laser start faster.
n | 0 | Checks link (nop).
p | 0-1023 | Analog limit for checking pulse emission from the laser.
m | 0-180 | Sets motor speed of the subsea motor. Should be max 50 for the 70 m cable.
l / L | 0-1023 | Sets lower temperature limit for the land motor.
u / U | 0-1023 | Sets upper temperature limit for the land motor.
s / S | 0-1023 | Sets lower temperature limit for the subsea motor.
v / V | 0-1023 | Sets upper temperature limit for the subsea motor.
o | 0-8192 | Sets minimum time for the cooling motor to be on or off. This ensures that duty cycles remain reasonable.
d | 0-8192 | Sets duty cycle in milliseconds for the cooling motor. If 0, automatic (temperature based) control is used. Primarily used for testing.
q | 0-1 | Whether to query the laser for status information. This interferes with normal operation and is only used for diagnostic purposes.


3.8.3 System connection diagram

Table 4 shows which pin that is used for each function according to the pinout of Figure 12, which shows

a stock Arduino Micro. The connections are also illustrated in Figure 13.

Table 4: Connection list for Arduino system.

Function | Pin number | Comment
Analog temperature of laser | A11 | Measures temperature of laser. Analog 0-5V input.
Digital temperature of laser over RS232 | D7 (RX), A2 (TX) | Only used for diagnostic purposes. TTL signals.
System enable (laser) – output | A3 | Turns laser on/off. Digital output.
System ok – input | A4 | Tells whether laser is ready or not. Digital input.
Laser pulse sniffer – input | 1 | Measures whether camera emits pulses. Analog 0-5V input.
Focus motor | 11 | Pulse train controlling servo motor position. Digital output.
Propeller motor controller – signal | 5 | Pulse train controlling motor speed. Digital output.
Propeller motor controller – power | 6 | Turns power to motor controller on/off. Digital output.
Backlid temperature | A0 | Measures temperature of backlid. Analog 0-5V input.
Humidity measurement | 4 | Measures temperature/humidity within system (close to camera). Digital 0-5V input.
Dry environment cooling motor | A5 | Turns on/off external cooling motor. Digital 0-5V input.
Surface comms | RX/TX | TTL signalling. Converted to RS485 onboard.
Heater – control signal | 9 | Analog output 0-5V. Gradually increases power applied to heat element.
Heater – measurement | A8 | Analog input 0-5V. Measures current applied to heating element.

3.9 Cabling and data transmission

For power transmission, the resistance in the cable causes a significant voltage drop. For the 70 m cable, the resistance is around 2 Ohm (return resistance). In order to provide a supply voltage in the housing of 20–28 V while drawing the required current, the topside box supply voltage must be dimensioned accordingly. System One, using the 70 m cable, was supplied with 36 or 40 V (AZTI topside box or SINTEF topside box, respectively). The housing is equipped with a voltage-limiting power transistor, ensuring that the voltage in the housing never exceeds 28 V. When the system is not running, the power transistor starts conducting, and the current increases until the resistive losses in the cable cause the housing voltage to drop to 28 V. For the case of 36 V supply voltage and the 70 m cable, 4 A of current (and hence 112 W) will be dissipated in the power transistor, causing moderate heating of the power transistor and back flange.

For System Two, a 30 m cable was manufactured, with an estimated return resistance of 1 Ohm. In the

situation described above, twice the current, and hence twice the power would be dissipated in the power

transistor when the system is not running. While the transistor is rated to tolerate even higher power

dissipation, this would require near ideal heat transfer to the back flange. This was regarded as a sub-

optimal solution. For this reason, an extra resistor has been manufactured. This resistor is connected in


series with the power wires of the 30 m cable, in order to provide a similar resistive voltage drop. The resistor is connected by banana plugs on the front panel of the topside box (see Figure 21). When using the 70 m cable, the banana plugs shall be short-circuited.
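A short sketch of the idle-state dissipation argument above, assuming the housing is clamped at 28 V and the cable carries the remaining voltage drop:

def idle_dissipation(v_supply, r_cable_ohm, v_clamp=28.0):
    # Current needed to drop the excess voltage across the cable,
    # and the resulting power dissipated in the clamping transistor.
    current = (v_supply - v_clamp) / r_cable_ohm
    return current, v_clamp * current

print(idle_dissipation(36.0, 2.0))  # 70 m cable: (4.0 A, 112.0 W), as stated above
print(idle_dissipation(36.0, 1.0))  # 30 m cable: (8.0 A, 224.0 W), i.e. twice the power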

Figure 21: System Two power and data cabling. Right: Cable entering the housing. Left: Cable at

topside box. Also shown are extra resistor for 30 m cable (metal block) and jumper for 70 m cable

(red wire).

3.10 Top-side box with power and communication electronics

The topside box used for System Two is the same box that AZTI constructed for System One. The main components are:

· Mean Well HLG-600H-36A power supply
· MiniLine Switch Mode DIN Rail Panel Mount Power Supply, 15 W, 5 V dc / 3 A
· Harting 5-port RJ45 DIN Rail Mount Ethernet Switch, 10/100/1000 Mbit/s
· Murata DMR20-10-DCM multimeter
· ZyXEL PoE12-HP PoE injector, supplying the camera
· Linear Technology RS485-TTL converter (Interface IC RS422/485 DIL-8, LTC490CN8#PBF)
· TTL-to-Ethernet converter, USR-TCP232-T, Jinan USR IOT Technologies

The inside of the box is shown in Figure 22. Several modifications have been made to the box. The mounting of the components has been strengthened, but at the cost of penetrating holes, so the box is no longer IP rated. Connectors for the series resistance have been added (see section 3.9). In System Two, PoE is no longer used, so the PoE injector (black box in Figure 22, right) is not connected. The PoE injector and the communication electronics PCBs on top of it are mounted with cable ties and double-sided adhesive tape. Considering transport ruggedness, this is a sub-optimal solution: the components inside the box should be checked for integrity and loose wires after transportation, until a more permanent mounting has been constructed.


Figure 22: System Two topside box. Left: components in the topside box. Right: PoE injector and

communications PCBs.

3.11 User interface GUI

A graphical user interface (GUI) has been developed to facilitate testing of the camera system in relevant

use cases. A screenshot of the GUI can be seen in Figure 23. The GUI is Matlab-based and includes the

following visualization modes:

1. Range-gated depth mode:

a. Show a depth image associated with a range gated sweep.

b. Show intensity image gated at a specific distance (usually determined by the depth image).

2. Low-light intensity mode:

a. Show an intensity image gated at a user-defined distance (allows for more averaging and

enhanced contrast).

The GUI also provides the ability to control all facets of image and sweep acquisition by adjusting:

1. User-operated range-gating by adjusting the distance and width of the range to be imaged.

2. Image resolution.

3. Binning of intensity and depth image

4. Number of image accumulations

In addition, the GUI includes a number of image processing algorithms that can be applied to enhance the

image contrast:

1. Histogram equalization

2. Colorize intensity image with respect to depth.

3. Noise filtering.

4. Contrast adjustment.


The visualization modes currently included in the GUI allow us to test the suitability of the system for different use cases, such as:

• Sea-floor monitoring: It is important to detect objects on the sea floor which may be covered in mud; these are not easily seen in an intensity image but can be seen in the depth image.

• Bio-mass estimation: The depth image can be used to estimate e.g. fish size in an effective and accurate way.

• Early warning/range extension: In many use cases it is important to see as far as possible, e.g. for early obstacle detection. The GUI includes both standard contrast-enhancement techniques and advanced techniques for combining the whole range-gated sweep into a single high-contrast image to extend the practical viewing range of the camera system.

Figure 23: Screenshot of the GUI. Notice that we have applied a color-overlay to the intensity image to

enhance the visual contrast.

3.11.1 Fileformat

The GUI also facilitates saving the raw image data that is acquired directly to file, such that it can be replayed and processed at a later time. A NetCDF1 data file is used to store the raw range-gated data. Figure 24 summarizes the raw sweep data that is stored; a new element is added to each of these variables for each frame that is acquired. Figure 25 summarizes the attributes of the sweep that are stored in the NetCDF file. The attributes define parameters such as how many acquisitions were made and the distance at which the data was acquired.

1 https://www.unidata.ucar.edu/software/netcdf/


Figure 24: Raw data variables that are stored in the NETCDF file.

Figure 25: Attributes of the range gated data.

In the following we summarize the most important variables and the relevant attributes that define e.g. at what distance the data was acquired.

PeakPosition – Distance to the detected peak

A 3-dimensional variable with dimensions (ImageHeight/2^UtofiaRangeBin x ImageWidth/2^UtofiaRangeBin x numberOfFrames) which contains the distance to the detected peak in delay steps (SequencerDelayIncrements). The distance in metres can be estimated as follows:

PeakPositionCm = (PeakPosition*SequencerDelayIncrement*1.67 + SequencerChannelDelayChannel2 - CameraPos) * 0.11 m/ns

The data is stored with a fixed-point datatype (10 fractional bits) and should be converted with the Ebus_ConvertTuple() function, which can be found in the project code repository.

PeakWidth – Width of the detected peak

A 3-dimensional variable with dimensions (ImageHeight/2^UtofiaRangeBin x ImageWidth/2^UtofiaRangeBin x numberOfFrames) which contains the width of the detected peak. This variable can be used to determine whether a peak is a result of forward scatter or not. The data is stored with a half-precision datatype and should be converted with the Ebus_ConvertTuple() function, which can be found in the project code repository.

PeakHeight – Height of the detected peak / confidence measure

A 3-dimensional variable with dimensions (ImageHeight/2^UtofiaRangeBin x ImageWidth/2^UtofiaRangeBin x numberOfFrames) which contains the height of the detected peak. The height of the detected peak can be viewed as a confidence measure. The data is stored with a half-precision datatype and should be converted with the Ebus_ConvertTuple() function, which can be found in the project code repository.

PeakIntensity – Image intensity at the detected peak

A 3-dimensional variable with dimensions (ImageHeight/2^UtofiaRangeBin x ImageWidth/2^UtofiaRangeBin x numberOfFrames) which contains the image intensity at the detected peak. The raw peak position should be adjusted for pixel position and intensity variations. This can be done as follows:

PeakPositionCalibrated = PeakPosition – PeakIntensity*k + h

where the values of k and h can be found in the UTOFIA project code repository.

intensityTo / intensityFrom – Determine the range of images that were averaged

One-dimensional variables which determine the range of images from the full sweep that were accumulated to create the intensity image. NumIntensityAvg determines how large this range should be.

Intensity – Intensity image

A 3-dimensional variable with dimensions (ImageHeight/2^UtofiaIntensityBin x ImageWidth/2^UtofiaIntensityBin x numberOfFrames) which contains the intensity images that were acquired. The image data should be divided by the number of frames that were accumulated, as defined by (intensityTo - intensityFrom + 1), before subtracting the Bgnd image. The attribute SequencerDelayRepetitionCount+1 determines how many exposures were averaged at each distance.

Bgnd – Background image data

A 3-dimensional variable with dimensions (ImageHeight/2^UtofiaSweepBackgroundBin x ImageWidth/2^UtofiaSweepBackgroundBin x numberOfFrames) which contains the background images that were acquired. The background image is acquired at a long distance from the camera (the distance can be computed as (CameraPos – SequencerBackgroundDelayChannel2)*0.11 m/ns) and should be subtracted from the intensity image to remove the black level and sensor artifacts from the image.

FgndSweep – Full sweep data

A 4-dimensional variable with dimensions (ImageHeight/2^UtofiaSweepForegroundBin x ImageWidth/2^UtofiaSweepForegroundBin x SequencerDelayIncrementCount x numberOfFrames) which contains the full sweep data. The data needs to be converted by Ebus_ConvertTuple(), which can be found in the project repository.
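As an illustration of how the stored data might be read back, the Python sketch below loads a saved file and applies the conversions described above. It assumes the listed attributes are stored as global NetCDF attributes, that the last dimension indexes frames, and that the intensity and background streams use the same binning; the file name is hypothetical, and the fixed-point/half-precision decoding done by Ebus_ConvertTuple() is not reproduced here.

import netCDF4

ds = netCDF4.Dataset("utofia_sweep.nc")  # hypothetical file name

# Peak position of the first frame, converted to distance as described above
# (0.11 m/ns corresponds to the round-trip speed of light in water).
peak_raw = ds["PeakPosition"][..., 0]
distance = (peak_raw * ds.SequencerDelayIncrement * 1.67
            + ds.SequencerChannelDelayChannel2 - ds.CameraPos) * 0.11

# Intensity normalisation: divide by the number of accumulated frames,
# then subtract the background image.
n_avg = int(ds["intensityTo"][0]) - int(ds["intensityFrom"][0]) + 1
intensity = ds["Intensity"][..., 0] / n_avg - ds["Bgnd"][..., 0]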


3.11.2 Monitoring interface

The UTOFIA status monitor provides some level of monitoring of the system. Launching it should produce the display shown in Figure 26, although during the first ten to fifteen seconds some indicators may be red or yellow.

Figure 26: Screenshot of Utofia status monitor

The values mean:

1. Laser temperature: Temperature of laser (lower value is higher temperature). If this exceeds

threshold (normally 320), the cooling has failed and the unit must be switched off.

2. Pulse monitor: Rate of trigger pulses to the laser. If this fails, this means that the camera has

trouble or is stopped – no worries.

3. Motor duty cycle: Indicates relative amount of cooling applied. If this exceeds threshold

(normally 99), the cooling has failed and the unit must be switched off.

4. Ambient humidity: Provides measurement of in-house humidity. If this exceeds threshold (and

this is not due to e.g. very cold environments), this is an early leak detection. Get the unit out of

the water and perform an inspection before considering whether to continue.

5. Ambient temperature: Provides measurement of in-house temperature. If this exceeds threshold

(normally 40), the cooling has failed and the unit must be switched off.

3.11.3 Synchronizing with external cameras

In order to synchronize with external cameras, the system can be instructed to flash the laser a number of

times. This is done through entering the number of flashes, a comment of what happens, and a filename

where this information is logged. When pressing send, the laser will flash, and the information is logged.

The flash of the laser is usually easily seen in both UTOFIA and other cameras, making synchronization

trivial.


4 Functional test report

System Two has new firmware that includes a number of new features. Along with the firmware, the PC software interface and visualization have also been further developed. The technical performance of the system, such as resolution, frame rate and image enhancement, is given at the end of this chapter.

System Two has been tested in waters with two different attenuation lengths, 1.5 m and 3.5 m. (Light is attenuated to 37% after 1 m if the attenuation length is 1 m, or to 13.5% for 1 m out and back.) We have measured range and 3D performance systematically by imaging a target at several distances. We have also tested Utofia for more general use.

In the tests performed, we have used a high-quality standard camera for comparison. Comparison with other systems is not straightforward because there are so many possibilities. Utofia is made to reduce the negative influence of backscatter when imaging under water, so its strongest point is where a single, self-contained unit is preferred and artificial illumination must be used. We believe that no single unit can compete with Utofia. The tests show that Utofia eliminates backscatter, and the range performance is in the middle of the expected range for a gated system. Depending on field of view and power, the expected range for a gated system is 3–7 attenuation lengths. In the tests, Utofia has shown a range of around 4.5 attenuation lengths. At this distance the water attenuation is around a factor of 8000.
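For reference, these attenuation figures follow from simple exponential (Beer-Lambert) attenuation with attenuation length L:

I(d) = I0 * exp(-d/L)  (one way),   I(d) = I0 * exp(-2d/L)  (camera to target and back)

so exp(-1) ≈ 37% after one attenuation length, exp(-2) ≈ 13.5% for the round trip, and at 4.5 attenuation lengths exp(-2*4.5) = exp(-9) ≈ 1.2e-4, i.e. roughly a factor of 8000.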

To compete with Utofia, other systems must reduce backscatter by separating the light source from the camera. How much separation and how much power is used will influence the result. In section (4.? fixme) the performance versus other systems is discussed.

In general, we could say that Utofia is limited by the signal to noise in the image, while other imaging systems are limited by contrast and the high dynamic range in the images. Scattering from larger particles in the foreground is difficult to eliminate with image enhancement techniques and gives irritating "visual noise" in the picture from a standard camera system. When Utofia has a signal to noise better than 4, the images are very pleasant: all artefacts from the foreground are eliminated. Utofia will have the same performance night and day, while other systems greatly benefit from ambient light.

The 3D capability is a unique feature of Utofia. Echo sounders and some optical systems also give 3D information. A short comparison with echo sounders is given in… Optical systems are based on triangulation and have limited range. Utofia will give useful 3D information at distances over 10 metres at 10 Hz if the water quality allows. Depth information makes it easier to understand what is seen in the image.

The following table summarizes the comparative performance of the UTOFIA system and a normal camera. As water quality greatly influences performance, we have normalized the table according to attenuation lengths, which provide a metric for visibility.

Aspect | UTOFIA | Reference camera
Imaging range | 4.5 attenuation lengths | 3.5–4 attenuation lengths outside the backscatter region; 2–2.5 in the backscatter region.
Image quality | Good contrast. Eliminates backscatter. Limited by signal to noise. | Good signal to noise. Less contrast at longer distances. Picks up small particles in the foreground which are difficult to filter out; this greatly degrades the visual quality.


4.1 System Two compared to a high-quality standard camera

We have tested System Two in two places, Oslo and Matre north of Bergen, in both places during daytime and night time. For comparison, we mounted a reference camera on the Utofia housing. It is mounted on the opposite side of the laser to reduce backscatter; the distance from the laser to the reference camera is around 0.2 m. The setup is illustrated in Figure 27.

Figure 27: Test setup with Utofia and reference camera. The reference camera is a monochrome

IDS camera model IDS UI-5240CP-M.


4.1.1 Reference camera

For comparison, a monochrome reference camera was used. We chose an IDS UI-5240CP-M camera, which has 70% quantum efficiency in the relevant wavelength range. An 8 mm focal length lens with f/# 1.4 from Computar is used. This should be a sensitive system, although the lux rating is not clear; we believe it would be rated to better than 0.1 lux. The datasheet for the camera is given in Figure 28.

Figure 28: Datasheet for reference camera.


4.1.2 Range extension

From the systematic tests with a target, later presented in section 4.4, we conclude that the range for Utofia is around 4.5 attenuation lengths, while for the IDS camera the range is 3.5–4 attenuation lengths. This agrees with earlier qualitative tests where we have seen that Utofia has around 20% longer range than a standard camera.

Increasing the range from 4.5 to 5.5 attenuation lengths will roughly require 10 times more signal to noise per pixel. One possibility is to reduce the field of view. For some applications, a narrow stripe illumination could be a solution; this can be obtained by removing the external diffusor. By narrowing the illumination from 350 pixels high to 70 pixels high, each illuminated pixel will receive 5 times more signal. This could be combined with 4 times more averaging, which means a lower frame rate or a shorter scan range. For some applications it should therefore be possible to reach 5.5 attenuation lengths.
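As a rough check of this budget (a sketch, assuming exponential attenuation and that SNR scales linearly with signal and with the square root of the number of averaged exposures): going from 4.5 to 5.5 attenuation lengths adds exp(2*(5.5-4.5)) = exp(2) ≈ 7.4 of extra round-trip attenuation, of the order of the factor 10 quoted above. Concentrating the illumination on 70 instead of 350 pixel rows gives 350/70 = 5 times more signal per pixel, and 4 times more averaging gives a further sqrt(4) = 2 times, for a combined SNR gain of about 5*2 = 10.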

At 10 Hz frame rate and 1 kHz laser repetition frequency, 100 laser pulses are emitted for each frame. Usually 4 pulses are used for each distance, and 4 distances are combined into one intensity image; that is only 16 pulses out of 100. This is the case in Ch… for the target imaging. For some applications, there might be a potential to use more of the pulses to make a better image, as described in chapter … on low-light mode. The potential is a factor of two better signal to noise.

Figure 29, which was included in the proposal, shows the expected range for different vision systems under water. According to this figure, Utofia is in the middle of the expected range. The IDS camera, however, is better than expected; clearly, new sensors have a much better dynamic range than before. In the target tests, the target was at the border of the backscatter region. The situation has similarities to the "sync scan" in Figure 29, where a longer range is expected.

Figure 29: Expected range for underwater vision systems.

In addition to the systematic tests with a target, we have recorded images from more practical use. Figure 30 shows a picture from the sea bottom in Oslo. A box was placed on the bottom half a year earlier and is now covered with mud, with little contrast to the bottom. The Utofia image shows the box at 3.8 m (2.5 attenuation lengths); in the IDS image it is very hard to see the box. The weak structure of the sea bottom can be seen in the Utofia image but is severely degraded by backscatter in the IDS image. This image clearly shows the benefits of UTOFIA: eliminating backscatter while simultaneously providing visual 3D information.


Figure 30: IDS and Utofia images of the box and target plate at the sea bottom. The distance to the box is 3.8 m (2.5 attenuation lengths).

Figure 31: Image from the tapered bottom of a fish cage. The net tapers down to a "garbage disposal system" at the bottom. The colour map chart is shown to the left: red is 6 m and dark blue is 12 m.

Another example, shown in Figure 31, is from the bottom of a fish cage at Matre. We are looking down a tapered section of the cage where garbage, food and dead fish accumulate. At the bottom of the tapered section there is a "garbage disposal system" with a tube going up to the surface. Thanks to the 3D colour overlay one can see that the blue part is at 11 m range and the reddish part at 6 m. There is a large difference between the Utofia and IDS images. The IDS image is mostly covered by backscatter and only a fraction of the image contains information. Even though the image is taken in daytime, there is no information outside the laser beam; the only useful part of the image is at the edge of the illumination. This shows that the conclusion from the target tests in section 4.4 might have been too favourable for the IDS camera, since the target is just at the border of the illumination.

On the other hand, image enhancement could possibly improve the IDS image in some situations. In this project, we could not spend much time on optimising other camera systems. As can be seen in the IDS image, there are a lot of scattering particles rather than a uniform backscatter in this case. Scattering particles are difficult to remove because they are similar to features in the picture.

One might argue that the IDS camera is outside its useful range here. The IDS image is not limited by signal to noise but by contrast, so a more sensitive camera or stronger illumination would not have helped. Separating the light source further would improve some parts of the image, but at 11 m distance even a separation of 1 m is small compared to the distance, and there will be parts of the image with large overlap between the illumination and the line of sight.

In summary, we believe that the IDS images are representative of what can be achieved with a regular camera, and the benefits of UTOFIA can clearly be seen in these images.


4.2 Comparison with echosounder

Echosounders are often used in fish cages to track and observe fish. An example of an image from a Didson echosounder can be seen in Figure 33. The advantage of echosounders is that they generally have a long range, but limited lateral resolution, because they use a lobe (e.g. 0.4x30 degrees) which is scanned back and forth along the direction normal to the lobe. The Utofia system, on the other hand, has very high lateral and depth resolution, but a shorter range than an echosounder. The high resolution of the UTOFIA data allows for more detailed analysis of fish with regard to size and behaviour.

While we have not made a direct comparison between UTOFIA and a stereo system, we would expect a relevant stereo system to have less range and precision than UTOFIA in comparable scenarios.

Figure 32: Top left image shows an intensity image of fish acquired from near the bottom of the

fish cage looking upwards. Top right image shows the depth map of segmented fish. Lower image

shows a 3D plot of fish. Notice the high lateral (spatial) resolution of the data which allow for more

detailed fish analysis than data which an echosounder such as Didson can provide (see Figure 33).


Figure 33: Image from Didson echosounder of fish in fish cage.


4.3 UTOFIA for fish imaging

The UTOFIA camera was tested for fish imaging at a fish farm on the west coast of Norway (Matre). An example image of fish swimming in a cage can be seen in Figure 34. The fish cage contained approximately 10 000 fish with an average weight of 2 kg and an average length of 50 cm. In this section we present results from this field test and show some examples of aggregate information which can be estimated from the images acquired by the UTOFIA system.

Figure 34: Image from UTOFIA of salmon fishes. False color indicates distance to camera.


4.3.1 Fish length

Without depth information, it is impossible to estimate the length/size of fish because of the perspective

effect. It is impossible to determine whether a fish which has a small extent in pixels in an image is a small

fish which is close to the camera or a larger fish which is far from the camera. However, with depth

information we can determine the distance from the camera and thereby adjust for the perspective effect.

In Figure 35 we show the two images where we estimate the length of two fish – even though a fish seems

smaller in extent the measurements shows that the smaller fish is larger than the seemingly larger fish.
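A minimal Python sketch of this perspective correction, assuming a simple pinhole camera model; the focal length in pixels and the head/tail pixel coordinates are illustrative values, not numbers from the report.

import math

def fish_length_m(head_px, tail_px, depth_m, focal_length_px):
    # Convert the pixel extent of the fish at the measured depth into metres.
    pixel_extent = math.hypot(tail_px[0] - head_px[0], tail_px[1] - head_px[1])
    return pixel_extent * depth_m / focal_length_px

# A fish spanning 150 px at 4.0 m is longer than one spanning 200 px at 2.5 m.
print(round(fish_length_m((100, 200), (250, 200), 4.0, 1300), 2))  # ~0.46 m
print(round(fish_length_m((100, 300), (300, 300), 2.5, 1300), 2))  # ~0.38 m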

Figure 35: Estimation of fish length. In the upper image, the length of a fish is estimated to 46 cm, while another fish, which appears smaller in the lower image, is estimated to be 55 cm long. This is the advantage of the depth information: we can correct for perspective differences due to distance and thereby accurately estimate fish size.


4.3.2 Fish swimming speed

By tracking fish over time we can estimate their swimming speed. In Figure 36 we show a school of fish and visualize the estimated swimming direction and speed of some of the fish. The depth information, together with the perspective view transform and temporal tracking of fish, is used to estimate a distribution of swimming speeds for the school. In this snapshot of a longer video acquired at 10 Hz, we found that the speed of the fish was 0.60 m/s with a standard deviation of 0.27 m/s.
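The same pinhole assumption can be used to turn a tracked pixel displacement into a speed; the sketch below ignores motion along the optical axis and uses illustrative numbers.

def swim_speed_m_per_s(displacement_px, depth_m, focal_length_px, frame_rate_hz=10.0):
    # Pixel displacement between consecutive frames -> metres per second at the fish's depth.
    displacement_m = displacement_px * depth_m / focal_length_px
    return displacement_m * frame_rate_hz

print(round(swim_speed_m_per_s(20, 4.0, 1300), 2))  # e.g. 20 px/frame at 4 m -> ~0.62 m/s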

Figure 36: Estimation of swimming speed and direction. By tracking fish over time we can estimate their swimming speed. The right image shows detected fish (yellow), and the red arrows show the estimated swimming direction and the relative movement in pixels of some of the detected fish. By using the depth information (left image) and the perspective view transform, the movement in pixels can be transformed into movement in Euclidean space. The middle image shows the intensity image of the fish and the movement arrows for some of the fish. The estimated speed of all the fish averages 0.60 m/s, with a standard deviation of 0.27 m/s.

4.4 Sea tests with target

A more systematic test of range and 3D precision has been done in two water conditions, in Oslo harbour and at a fish farm at Matre north of Bergen. Tests have been done both during daytime and at night. Utofia is an active imaging system with built-in illumination, and we wanted to show that it operates in the day as well as at night. During daytime in shallow water, a sensitive camera will take good images without illumination. During the night, we used the Utofia illumination as the light source for the reference camera. The laser illumination has a well-defined lobe, giving a well-defined backscatter region.

The Utofia housing with the IDS reference camera mounted (Figure 27) was lowered on a rope, looking directly downwards. A target, shown in Figure 37, was lowered under the housing. The distance between target and housing was changed in steps to cover situations with both good and low signals.

The water attenuation is a critical parameter for the range of an optical vision system. The water quality was measured with a light source mounted on a rod together with the reference camera; the setup is shown in Figure 37. The distance between the light source and the camera is 0.95 m. By measuring the intensity and comparing it to the intensity measured in air, the attenuation in the water is estimated. The light source is modulated to eliminate ambient light. When the water measurement is done, the reference camera is mounted onto the Utofia housing.
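A sketch of the attenuation-length estimate from this measurement, assuming single-path exponential attenuation over the 0.95 m light-source-to-camera path and that geometry and sensor factors cancel in the water/air intensity ratio; the example numbers are illustrative.

import math

def attenuation_length_m(intensity_water, intensity_air, path_m=0.95):
    return -path_m / math.log(intensity_water / intensity_air)

# Example: a water reading of ~53% of the air reading over 0.95 m
# corresponds to an attenuation length of roughly 1.5 m.
print(round(attenuation_length_m(0.53, 1.0), 2))  # ~1.5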


Figure 37: The target (left) is lowered on a rope under the camera and is used for measuring signal and contrast. Approximate size 1.0x0.3 m. Right: attenuation length measurement system. The reference camera is used for water attenuation measurement when mounted together with a modulated LED light source.


4.4.1 Tests at Dronningen in Oslo

In Oslo, a day test was done on 09.09.2017 and a night test on 15.09.2017. The attenuation length was estimated to 1.5 m from 1 m below the surface down to the bottom at 9 m depth on both days. There was a freshwater layer at the surface, and we placed the system below this layer.

Figure 38 shows the test site and the setup. The Utofia system was held 1.5 m below the surface and a target plate was lowered 5–7 metres under the camera.

Figure 38: Tests at Dronningen. Left: the upper part of the image gives an indication of the cloud cover; there was no sunlight during the day test. The Utofia unit with reference camera was lowered from a floating stage. Right: picture from early evening showing the test geometry. The housing is held 1–2 m below the surface and the target was lowered directly under the camera. The water depth is around 9 m at the test site.

4.4.1.1 Day test Oslo

From the day test we have selected three different distances from camera to target: 5 m, 6.3 m and 7 m.

For Utofia we present four types of images:

1. An unfiltered depth image showing the distance estimate for each pixel. Where the signal is too low, the depth image will be noisy.

2. An image showing the depth signal strength. This image can be used to set a noise floor for the depth image to avoid invalid data. E.g. at 7 m range we are close to the bottom and pick up a weak signal from it; we cannot resolve the bottom, but we can detect that it is there.

3. An intensity image that is an average of 4 delay steps in front of the target, which provides a visual image of the object. A running average of a background image is subtracted from the intensity image to improve the quality.

4. One image combining intensity and depth. The depth value and strength are used to colour the intensity image to enhance the contrast.

An example of all four images is given in Figure 39. For the remaining cases (Figure 40–Figure 42) only the intensity image and the colour-coded intensity image are given.


Figure 39: Utofia results for 5 m range and 4 averages per delay step. The 3D image (upper left) shows a good signal from the target. Upper right: black and white intensity image. Lower right: the same image with colour-coded depth. Some small fish, 2 cm long, in the foreground are gated away and appear as dark spots.

From the IDS reference camera, we present two images for each distance:

1. One image with automatic exposure and gain.

2. One image where exposure/gain has been hand-optimized. In practice this will not be possible, but it shows the "best" exposure that a regular camera could provide in such circumstances.

We have calculated contrast and SNR metrics for the UTOFIA and IDS images; these are provided together with the images in the illustrations. From our point of view, however, these numbers do not reflect the overall image quality of either system well.

As we see it, the images show that:

• UTOFIA and IDS have a similar range in daylight at these water qualities.

• UTOFIA removes backscatter from the images and does not require hand-tuning of the contrast.

Still, if there is a sufficient amount of ambient light, the primary benefit of UTOFIA seems to be the continuous availability of 3D information.


Figure 40: UTOFIA (left) and IDS images (right) for 5 m range. Top left: UTOFIA intensity image.

Bottom left: UTOFIA intensity image with depth overlay. Top right: IDS image with automatic

contrast. Bottom right: Hand-optimized contrast for IDS camera.

Figure 41: UTOFIA (left) and IDS images (right) for 5 m range. Top left: UTOFIA intensity image. Bottom left: UTOFIA intensity image with depth overlay. Top right: IDS image with automatic contrast. Bottom right: hand-optimized contrast for IDS camera. Utofia uses 8 repetitions for each gating in this picture. Backscatter from the laser is visible in the IDS image even in daylight, while the laser illumination on the target is insignificant compared to the daylight on the target. The colour-coded image (bottom left) shows that the target is detected by the 3D algorithm.


Figure 42: UTOFIA (left) and IDS images (right) for 7 m range. Top left: UTOFIA intensity image. Bottom left: UTOFIA intensity image with depth overlay. Top right: IDS image with automatic contrast. Bottom right: hand-optimized contrast for IDS camera. UTOFIA detects the bottom, providing 3D data for most of the scene.

4.4.1.2 Night test in Oslo

From the night test we have selected two different distances from camera to target, 5.5 m and 6.5 m. The conditions were slightly changed compared to the day test: the camera was lowered further to avoid the freshwater layer, and 6.5 m from the camera was close to the bottom.

Similar to the daylight results, we present two images for UTOFIA:

• One intensity image that is an average of 4 delay steps in front of the target. A running average of a background image is subtracted from the intensity image to improve image quality.

• One image combining intensity and depth. The depth value and strength are used to colour the intensity image to enhance the contrast. From the colour in the combined image, information about the depth signal can be extracted: a strong colour means a good depth signal, a weak colour means a weak signal, and noise in the colours (red or yellow where we expect green or blue) means the depth signal is invalid.

Utofia uses 4 repetitions for each gating at 5.5 m and 8 repetitions at 6.5 m.

Similarly, from the IDS reference camera, we present two images for each distance:

1. One image with automatic exposure and gain.

2. One image where exposure/gain has been hand-optimized. In practice this will not be possible, but it shows the "best" exposure that a regular camera could provide in such circumstances.

Results are shown in Figure 43 and Figure 44. Judging images is a subjective task. In Figure 44 it could be argued that Utofia captures the target and the bottom while nothing is visible in the IDS image. On the other hand, the IDS camera setup could certainly be improved by placing more light at a larger distance from the camera.


Figure 43: UTOFIA (left) and IDS images (right) for 5.5 m range. Top left: UTOFIA intensity image. Bottom left: UTOFIA intensity image with depth overlay. Top right: IDS image with automatic contrast. Bottom right: hand-optimized contrast for IDS camera. The colour-coded image (lower left) shows that the target is detected by the 3D algorithm.

Figure 44: UTOFIA (left) and IDS images (right) for 6.5 m range. Top left: UTOFIA intensity image. Bottom left: UTOFIA intensity image with depth overlay. Top right: IDS image with automatic contrast. Bottom right: Hand-optimized contrast for IDS camera.


4.4.2 Tests at Matre, western Norway

In search of better water quality, UTOFIA was brought to the west coast of Norway. The marine research institute kindly gave us access to their facility at Matre, north of Bergen (Figure 45); an overview of the facility is given in Figure 46. Day and night tests were done on 4 October. The facility has a small house ideally placed on the floating stage, which gave us shelter from the rain, and the topside equipment was placed inside the house. It was around 30 m from the house to the nearest fish cage, so we used the 70 m cable.

We started by measuring the water attenuation at several depths around 12:00. It was raining and had been raining heavily during the preceding weeks, leaving a thick layer of fresh, turbid surface water. There was a strong current out of the fjord, modulated by the tide, and the current affected the shape of the cages. We measured attenuation lengths both outside and inside one fish cage; there was no difference in water quality inside and outside the cage. The result is given in Figure 47. The water quality improved with depth.

After that we did the target measurements outside the cage, where the water depth was around 30 m. While waiting for darkness we looked at fish inside the cage and at the bottom of the cage. After dark, around 20:00, we repeated the target tests.

Figure 45: Map showing location of Matre research farm.

Figure 46: Overview of the test facility at Matre (left). Arrows show where the tests were done. Topside equipment was placed inside a small house on the floating stage (upper right).


Figure 47: Water attenuation profile at Matre. Rather clear water was found below 10 m. No significant difference was found inside and outside the fish cage.

4.4.2.1 Day test Matre

A reference camera was mounted in the same way as for the tests in Oslo. Unfortunately, the reference camera came slightly out of position: during handling it was rotated around 15 degrees relative to the UTOFIA housing. The target plate was lowered to approximately 21 m depth, and the camera housing was lowered towards the target in 2 m steps. From the day test we have selected two camera-to-target distances, 9.7 m and 13.5 m.

For UTOFIA we present four types of images, the same as for the day test in Oslo: an unfiltered depth image, the depth signal strength, an intensity image, and a combined depth and intensity image. We have used 4 exposures for each delay step.

From the IDS reference camera, we present three images for each distance: two with the laser on and one with the laser off. We have selected automatic exposure and gain and a 10 Hz frame rate. In one of the images we have manually optimized the contrast to better show the target. Images from the test are given in Figure 48 to Figure 51.

At 9.7 m range (Figure 48 and Figure 49), the signal-to-noise ratio for UTOFIA is half of the value obtained by the IDS camera in ambient light. UTOFIA has practically no sensitivity to ambient light: a narrow-band filter removes most of the daylight, and the remaining ambient light is removed when the background image is subtracted. It is impressive that the laser illumination is comparable to the daylight. There are large variations in the signal-to-noise and contrast values extracted from the IDS camera, depending largely on where in the image the target is. The laser illumination relative to the daylight can also be evaluated from Figure 48: with the laser on, the signal-to-noise level for the IDS camera is 61, compared to 36 with the laser off.

At 13.5 m distance (Figure 50 and Figure 51), the signal-to-noise ratio for UTOFIA is 3. This is lower than the value obtained by the IDS camera, but we have a significant 3D signal. The standard deviation of the distance is 0.1 m on the white part of the target.


Figure 48: IDS camera, 9.7 m distance. In the two left images, the laser is on and contributes to half the signal from the target. In the image to the right the laser is off. The target can clearly be seen.

Figure 49: UTOFIA images at 9.7 m distance. The upper left image is the black-and-white image recorded with 4 averages. The lower left image shows the 3D picture from the camera; where there is no target, the camera picks the highest signal out of the noise. The lower right image shows the 3D signal levels. To the upper right is the black-and-white image with colour overlay. The depth gives the colour and the signal level gives the colour saturation.

Figure 50: IDS camera, 13.5 m distance. In the two left images, the laser is on and contributes to more than half the signal from the target. At this range, the camera is further into the turbid surface water, and more backscatter is seen at this camera depth (7.5 m, versus 11.4 m for the 9.7 m distance). To the right, the laser is off. The target can clearly be seen.


Figure 51: UTOFIA images at 13.5 m distance. The upper left image is the black-and-white image recorded with 4 averages. The two lower images show the 3D pictures from the camera. To the upper right is the black-and-white image with colour overlay.

4.4.2.2 Night test Matre

In the night test, the water quality seems to have become slightly worse. There was a strong tidal current that might have changed the conditions, and we might also have been higher up in the water, reaching further into the surface layer. We aimed at the same depths as in the day test, but there were deviations; the distances were 10.3 m and 13.9 m. The signal level at 13.9 m range was around half of the value at 13.5 m in the day test.

For the night test, the image with the laser off is skipped for the reference camera; the rest is the same as in the day test. The reference camera was better aligned before the night test.

Images from the night test are given in Figure 52 to Figure 55. At 10.3 m distance the IDS camera gives a better signal-to-noise ratio but lower contrast, as expected. At 13.9 m the target is hardly seen by the reference camera, but it is seen and detected in 3D by UTOFIA. The standard deviation of the depth estimate is 1.4 m, showing that the settings used are at the limit of detecting the distance to the target. More binning or more averages could be used to improve the situation.

Figure 52: IDS camera, 10.3 m range. Original image to the left with automatic contrast and gain. To the right the contrast is increased to better show the target. The target can clearly be seen.


Figure 53: UTOFIA images at 10.3 m range. The upper left image is the black-and-white image recorded with 4 averages. The signal-to-noise ratio is 1/4 of the value obtained by the IDS camera, but UTOFIA has the better contrast. The lower left image shows the 3D picture from the camera. To the lower right are the signal levels the 3D picture is based on. To the upper right is the black-and-white image with colour overlay.

Figure 54: IDS camera, 13.9 m range. The target can hardly be seen at the end of the rope. The contrast is increased in the right picture.


Figure 55: UTOFIA images at 13.9 m range. The upper left image is the black-and-white image recorded with 4 averages. The lower images show the 3D pictures from the camera. The upper right picture shows the intensity image with colour overlay from the 3D information.

4.5 3D precision depending on range and conditions

The 3D precision that can be attained is highly correlated with the signal-to-noise ratio (SNR). A white or reflective target far away should give the same accuracy and standard deviation as a dark target close by, provided the SNR is the same. The signal level depends on the distance to the target, the albedo of the target and the attenuation length of the water. The noise of the signal is reduced with increasing binning and accumulation. Figure 56 summarizes depth precision versus SNR from experiments imaging a target at different distances and under different water qualities. Notice the trend: with increasing SNR, the depth precision improves. At Matre we got an SNR of 3 at 14 m range and a standard deviation of 6 cm. With good signal levels at shorter ranges, we approach a depth precision of 1 cm.

Figure 56: Depth precision versus SNR. This plot shows the depth precision that was observed when imaging a target at different distances and in different water qualities. We observe a correlation between SNR and depth precision. The data are noisy because they were acquired in the field under varying and sometimes challenging conditions; however, in good conditions we observe a depth precision as low as 1 cm.
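The following is an illustrative sketch, not the project's analysis code, of how one point in a depth-precision-versus-SNR plot such as Figure 56 could be derived from a recording: the SNR is estimated from the target intensity against the background noise, and the depth precision is the standard deviation of the per-pixel depth estimates over a flat part of the target. The array names and masks are assumptions made for illustration.

    # One (SNR, depth precision) point from a stack of depth frames and an
    # intensity image, using a target mask and an off-target noise mask.
    import numpy as np

    def precision_vs_snr(intensity, depth_frames, target_mask, noise_mask):
        """Return (snr, depth_std_m) for one recording."""
        signal = intensity[target_mask].mean()
        noise = intensity[noise_mask].std()               # region without target
        snr = signal / noise
        depth_std = depth_frames[:, target_mask].std()    # spread of depth estimates [m]
        return snr, depth_std

    # Synthetic example: 20 depth frames of a flat target at 13.5 m with 0.1 m noise
    rng = np.random.default_rng(0)
    depth = 13.5 + 0.1 * rng.standard_normal((20, 64, 64))
    inten = 100 + 10 * rng.standard_normal((64, 64))
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 16:48] = True
    print(precision_vs_snr(inten, depth, mask, ~mask))    # roughly (10, 0.1)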


4.6 Optimizing range, resolution and frame rate

The camera and associated software have been designed to be flexible with regard to the trade-offs that can be made to customize the acquisition for different scenarios. In some scenarios we may be photon starved, so improving SNR is important; in others we are primarily interested in depth information rather than intensity images, or vice versa; in yet others we may want a high frame rate and are willing to sacrifice some SNR. In this section we discuss some of the specific trade-offs that the system allows.

Spatial resolution versus frame rate

The frame rate of the camera depends on the region of interest (number of pixels, W x H) that is used. Figure 57 shows the camera frame rate as a function of the region of interest. The maximum repetition rate of the laser is 1000 Hz, and the laser is tuned for and operates most efficiently at 1000 Hz. This means that the image region of interest should be restricted to 0.5 Mpixels. Higher resolution is possible, but at the cost of a lower rate; for example, using the full image resolution (1280x1024) reduces the rate to 400 Hz. The default region of interest used for the experiments was 960x512. Additional constraints on the image resolution are posed by the firmware to facilitate individual binning of intensity, peak and full sweep: the width and height have to be multiples of 8. Furthermore, the Gigabit Ethernet connection provides a bandwidth of approximately 600 Mbit/s, which means that some of the image streams (1 intensity image, 1 background image, 3 peak-related images, and 1 full sweep) need to be binned to avoid losing frames. At a 10 Hz update rate, each frame can take up to 7.5 MB. Our default setup is to do no binning of the intensity and background images, 2x2 binning of the peak-related images and 8x8 binning of the full sweep, which results in a frame size of approximately 3 MB (pixels are 16 bits).
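As a back-of-the-envelope check of this budget, the sketch below recomputes the per-frame data volume for the default 960x512 region of interest, 16-bit pixels and the default binning. It is not the firmware's own accounting; in particular, the number of delay steps assumed for the full sweep is an illustrative assumption.

    # Per-frame data budget for the default image streams (sketch, not firmware code).
    BYTES_PER_PIXEL = 2            # 16-bit pixels
    ROI = 960 * 512                # default region of interest
    LINK_MBIT_S = 600              # usable Gigabit Ethernet bandwidth
    FRAME_RATE_HZ = 10

    def stream_bytes(n_images, binning):
        """Size of an image stream after binning x binning averaging."""
        return n_images * ROI // (binning * binning) * BYTES_PER_PIXEL

    frame_bytes = (
        stream_bytes(1, 1)         # intensity image, no binning
        + stream_bytes(1, 1)       # background image, no binning
        + stream_bytes(3, 2)       # three peak-related images, 2x2 binning
        + stream_bytes(20, 8)      # full sweep, 8x8 binning (20 delay steps assumed)
    )
    budget_bytes = LINK_MBIT_S * 1e6 / 8 / FRAME_RATE_HZ    # ~7.5 MB per frame at 10 Hz
    print(f"frame: {frame_bytes / 1e6:.1f} MB, budget: {budget_bytes / 1e6:.1f} MB")
    # -> roughly 3 MB per frame, well within the ~7.5 MB budget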

Figure 57: Frame rate versus image resolution.


Short-range 3D vs Long-range 3D

Assuming we have 100 exposures available for acquiring a sweep, there is a trade-off in how long a range we should cover with them. The exposures can be spent either on many averages per step and few steps, or on few averages per step and many steps. We have found that if the water quality is relatively poor (less than 2 m attenuation length), the best compromise is to acquire the images with as small a step size as possible (18 cm separation) to reduce the effect of backscatter. We also found that a good compromise between range and SNR is to cover a range of 4 m (e.g. 1-5 m) and to use 4 averages per range step.

However, if the water quality is better, we are less affected by backscatter. This means that we can double the step size between images, which enables UTOFIA to cover twice the range (1-9 m) at the same frame rate.

We have also added an easily accessible feature which doubles the number of averages to 8 at each range step; however, this reduces the frame rate to 5 Hz.
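The arithmetic behind this trade-off is summarized in the sketch below, which simply relates the exposure budget per output frame to the number of gating steps, the step size and the covered range. The numbers mirror the defaults mentioned above; the function itself is illustrative and not part of the UTOFIA software.

    # Exposure budget versus covered range (illustrative calculation).
    def sweep_coverage(exposures_per_frame, averages_per_step, step_m):
        steps = exposures_per_frame // averages_per_step
        return steps, steps * step_m            # number of gating steps, covered range [m]

    LASER_RATE_HZ = 1000
    FRAME_RATE_HZ = 10
    budget = LASER_RATE_HZ // FRAME_RATE_HZ     # 100 exposures per frame at 10 Hz

    print(sweep_coverage(budget, 4, 0.18))      # poor water:    25 steps, ~4.5 m covered
    print(sweep_coverage(budget, 4, 0.36))      # clearer water: 25 steps, ~9 m covered
    print(sweep_coverage(budget * 2, 8, 0.18))  # 8 averages at 5 Hz: 25 steps, ~4.5 m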

As explained in Section 3.4.1, the acquisition can be performed using interleaved or non-interleaved sequencing. The trade-off is that with interleaved sequencing the depth image is better behaved if there are fast-moving objects in the scene, while the intensity image will be more blurred; with non-interleaved acquisition the depth image will be noisier on depth edges, while the intensity image will be less blurry. All of these parameters are adjustable; for example, the user can easily change the range span or the number of accumulations if other trade-offs are better for a specific scenario.

Accumulating range image

The firmware allows for accumulating several of the range images into an intensity image to increase SNR. The trade-off is that we do not want to include range images gated behind relevant objects in the scene, nor range images gated so close to the camera that we integrate too much backscatter. By default we average 4 range images; however, this is fully customizable.
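A small sketch of this gate selection is given below, under the assumption that the gate distances and the approximate target distance are known. The function and its inputs are hypothetical and only illustrate the rule of skipping near gates (backscatter) and gates behind the object.

    # Choose which gated range images to accumulate into the intensity image.
    import numpy as np

    def accumulate_intensity(range_images, gate_distances, target_distance,
                             min_gate_m=2.0, n_gates=4):
        """Average up to n_gates range images gated between min_gate_m and the target."""
        usable = [i for i, d in enumerate(gate_distances)
                  if min_gate_m <= d <= target_distance]
        chosen = usable[-n_gates:]               # the gates closest to, but in front of, the target
        return np.mean([range_images[i] for i in chosen], axis=0)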

Binning

The software allows for separate binning of the intensity and range images. By binning the images, the SNR is improved at the cost of resolution; for example, by binning the images 4x4 the SNR is improved by a factor of 4. In some cases, e.g. when imaging objects that are far away or when we receive few photons from the objects, it may be better to trade resolution for SNR. Depth estimates are sensitive to noise, so we have found that it is often more useful to trade resolution for SNR for the depth estimates than for the intensity image. In Figure 15 we show the effect of binning on depth estimates.
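The sketch below shows plain 4x4 block binning and the expected noise reduction: averaging 16 pixels reduces uncorrelated noise by sqrt(16) = 4, i.e. the SNR improves by a factor of 4 at the cost of a four times lower resolution in each direction. This is generic NumPy code, independent of the UTOFIA software.

    # 4x4 block binning and its effect on noise (illustrative).
    import numpy as np

    def bin_image(img, b):
        """Average non-overlapping b x b blocks; image dimensions must be multiples of b."""
        h, w = img.shape
        return img.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    noisy = 100.0 + 10.0 * rng.standard_normal((512, 512))   # SNR = 10
    binned = bin_image(noisy, 4)
    print(noisy.std(), binned.std())   # ~10 vs ~2.5: noise down by a factor of 4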

Low light mode

In those scenarios where we are photon starved and 3D information is not of interest, e.g. when we are interested in seeing as far as possible, it is important to improve SNR by accumulating as many exposures as possible while at the same time keeping the frame rate acceptable. A low-light mode has been designed specifically for this purpose, where all exposures are acquired at one specific range. Assuming we want to uphold a 10 Hz frame rate, we have 100 exposures available. In 3D mode these exposures would have to be spread out over many different ranges, while in low-light mode we use one quarter of the exposures for the background image and three quarters for the foreground, gated at a user-specified distance. The distance should be further out than 2 m to remove backscatter, and should ideally be gated approximately 2 metres in front of the objects we are trying to image. The effect of low-light mode can be seen in Figure 58, where the SNR of the image is dramatically increased (from 2.06 in 3D mode to 5.19 in low-light mode) because we average more foreground images and have a more up-to-date background image, which suppresses the fixed-pattern noise (vertical stripes).
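The sketch below works through the low-light exposure budget and the SNR gain one would expect from averaging alone, assuming uncorrelated noise that scales down as 1/sqrt(N). The observed improvement in Figure 58 (from 2.06 to 5.19) is of the same order as the sqrt(64/8) ≈ 2.8 expected from averaging 64 instead of 8 foreground exposures. This is plain arithmetic, not the UTOFIA firmware.

    # Low-light-mode exposure budget and expected SNR gain (illustrative).
    import math

    LASER_RATE_HZ = 1000
    FRAME_RATE_HZ = 10
    exposures = LASER_RATE_HZ // FRAME_RATE_HZ     # 100 exposures per frame

    background = exposures // 4                    # 25 exposures for the background image
    foreground = exposures - background            # 75 exposures at one user-specified gate

    # Expected SNR improvement over 3D mode, assuming noise averages down as 1/sqrt(N)
    # and using the accumulation counts from Figure 58 (64 vs 8 foreground exposures):
    gain = math.sqrt(64 / 8)
    print(background, foreground, round(gain, 2))  # 25 75 2.83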

The 75 foreground exposures do not necessarily have to be acquired at the same distance. Further work will include looking into ways of spreading them over a small range to detect whether objects are further out, within that range, or closer to the camera. This can for instance be used to create an early-warning detection system, and to enhance the visualization by coarsely colour-coding the intensity image with respect to depth.


When we are imaging objects far away, the full resolution of the image may not be necessary, and binning of the pixels can improve the image quality and SNR even more. Interleaved acquisition should be used to improve the background estimation and reduce the impact of FPN.

Figure 58: The left image is captured in low-light mode with 64 accumulations in the foreground and 16 for the background. The right image is acquired in 3D mode with 8 accumulations per range step. Notice that the SNR of the image captured in low-light mode is approximately 2.5 times as high as for the image in 3D mode (5.19 in low-light mode versus 2.06 in 3D mode). Notice also that the FPN (vertical stripes) is drastically reduced in low-light mode because we have a more up-to-date background.


A Appendix - Remaining risk overview

Table 5 summarizes the risks originally identified in D1.1. While most issues have been cleared, the following risks remain fully or partly unresolved:

• The system is now power/sensitivity limited in range, which limits our capability of getting good images of distant objects.

• The third alternative sensor part, the 7 µm sensor, was not completed. This third sensor option was not originally part of the project and was thus a low-priority task. After initial investigations showed that the effort required to get high performance out of the sensor would be significantly larger than originally envisioned, we put the development of this sensor on hold.

Beyond these two risks, the remaining risks have been handled.

Table 5: Identified risks for System Two from D1.1. Green indicates a cleared risk, yellow a risk still present for System Two. More details on each individual risk can be found in D1.2.

1. ToF sensor/pixel response time too slow: Time response is OK. We achieve good depth resolution.

2. ToF sensor/pixel sensitivity too low: Problems in handling low intensities (long ranges).

3. Lower than 2 mJ pulse power available in laser pulse: Laser pulse energy exceeds 2 mJ (> 2 mJ for System One, > 3 mJ for System Two).

4. Available volume for laser is too small: No.

5. Too much noise induced in sensor due to poor EMC: Image data have some FPN, but probably not due to EMC.

6. System poses optical hazard to users from laser illumination: Eye damage is very unlikely to occur. Whether temporary blindness creates a hazard is not clarified.

7. Available total volume too small: No, volume OK.

8. Housing too soft (pressure rating goal is increased to > 300 m): Tested to 100 m (System One) and 250 m (System Two).

9. Active cooling hard to regulate: No, cooling works well.

10. Too good thermal conductivity makes the system fail in cold water (0-4 °C): No. Power dissipation at start-up is more than 100 W and the system heats up. The heat-up time is reduced for System Two.

11. Volume available for beam expander optics too small: No.

12. Mounting of 2nd lens in beam gives focused reflections: No.

13. More power needed: No, the components can be supplied with power through the selected cable and connector.

14. Cable not available with suitable cross sections and number of wires: Cable and connector work fine. The cross section for the power cables was restricted by connector and cable stiffness. The voltage drop is 10 V through the 70 m cable; 36 V is supplied at the surface and a voltage-limiting circuit is included in the housing.

15. Too few pins in one connector: No.

16. Alternative image sensor '7 μm pinned' fails to perform adequate range-gating: It is possible that the alternative image sensor would show worse or poor performance when integrated within System One. Not tested.

17. Class 3R system not possible with full laser power: Diffusor solution works fine. Calculations show a safety margin of 40. Formal classification is lacking.