Sunday, November 14, 2010

PCM Spectrum Analyser and Octal FSK Decoder / Encoder

This was a prototyping project to see if I could capture PCM (pulse code modulated) data from my sound card - where the sound was encoded using an octal FSK (frequency shift keyed) method - and then decode it. The application displays the captured waveform (amplitude / time domain), a frequency / time domain plot, and the frequency components of a sampled portion of the original waveform computed using a fast Fourier transform (FFT). It also has a waterfall-type frequency display, generated from iterative FFTs over the complete waveform. Not only can the application capture and decode the PCM data - it can also use an inverse Fourier transform (Fourier synthesis) to encode data to PCM, and it even includes a method to synthesise noise on the generated PCM waveform. The intention is that this prototype will form the basis of an octal FSK encoder / decoder service that allows multiple client applications on a network to send and receive data over a radio channel. Developed in C# making use of DirectX DirectSound and the Exocortex.DSP (http://www.exocortex.org/dsp/) digital signal processing engine.
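As a rough illustration of the encoding side, here is a minimal C# sketch of octal FSK tone synthesis. This is not the application's actual code: the tone plan (eight tones spaced 100 Hz apart from 1 kHz), the 44.1 kHz sample rate and the 10 ms symbol length are all assumptions of the sketch, and in the real application the generated samples would be played out through DirectSound.

```csharp
// Minimal sketch of octal FSK tone synthesis (not the original application code).
// Assumed tone plan: 8 tones spaced 100 Hz apart starting at 1000 Hz, 44.1 kHz
// sample rate, 16-bit mono PCM, and a fixed 10 ms duration per 3-bit symbol.
using System;
using System.Collections.Generic;

static class OctalFskEncoder
{
    const int SampleRate = 44100;        // assumed PCM sample rate
    const double BaseFrequency = 1000.0; // assumed lowest tone (Hz)
    const double ToneSpacing = 100.0;    // assumed spacing between the 8 tones (Hz)
    const double SymbolSeconds = 0.010;  // assumed symbol duration

    // Encode a byte stream as 16-bit PCM samples, 3 bits (one octal symbol) at a time.
    public static short[] Encode(byte[] data)
    {
        var samples = new List<short>();
        int samplesPerSymbol = (int)(SampleRate * SymbolSeconds);
        double phase = 0.0; // continuous phase avoids clicks at symbol boundaries

        foreach (int symbol in ToOctalSymbols(data))
        {
            double freq = BaseFrequency + symbol * ToneSpacing;
            double phaseStep = 2.0 * Math.PI * freq / SampleRate;
            for (int i = 0; i < samplesPerSymbol; i++)
            {
                samples.Add((short)(Math.Sin(phase) * short.MaxValue * 0.8));
                phase += phaseStep;
            }
            phase %= 2.0 * Math.PI;
        }
        return samples.ToArray();
    }

    // Split each byte into octal symbols (values 0..7), most significant bits first.
    static IEnumerable<int> ToOctalSymbols(byte[] data)
    {
        int buffer = 0, bits = 0;
        foreach (byte b in data)
        {
            buffer = (buffer << 8) | b;
            bits += 8;
            while (bits >= 3)
            {
                bits -= 3;
                yield return (buffer >> bits) & 0x7;
            }
            buffer &= (1 << bits) - 1; // keep only the bits not yet consumed
        }
        if (bits > 0)
            yield return (buffer << (3 - bits)) & 0x7; // pad the final partial symbol
    }
}
```

Decoding would work in the opposite direction: FFT a symbol-length window of samples (e.g. with Exocortex.DSP) and pick the tone bin with the highest magnitude to recover the 3-bit symbol.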

Saturday, November 13, 2010

Software Defined Radio and PSK31 Decode using Digipan

I recently purchased a WinRadio Excalibur Software Defined Radio (SDR). This very cool radio can scan and analyse radio signals across the full 0.00 - 50.00 MHz range - and therefore makes a great wide-spectrum analyser! It supports AM, AMS, FM, LSB/USB (SSB), CW, DRM, FSK and UDM modulation.

For a while I listened to the amateur bands (particularly 20 and 15m) and recorded "QSOs" in my own QSO logging software (which I will feature on this blog at some stage soon) - however I quickly became interested in decoding digital modes in the amateur bands (e.g. PSK31).

The screen capture above shows the SDR receiving PSK31 and decoding it using a free software application called Digipan. To set this up I also had to download VAC (Virtual Audio Cable) to route the audio to Digipan for decoding.

Saturday, October 16, 2010

CueSo (Advanced Amateur Radio QSO Logging Software)

After the purchase of my WinRadio Software Defined Radio (SDR), I started listening to a lot of radio amateur traffic on the 20 and 15m bands. I use a very compact active antenna, only about 40cm long, mounted vertically off the side of my apartment balcony here in London. I was interested to see what sort of range and performance I could get using this setup (which is less than ideal) and wanted to be able to plot the stations I heard on a map and perhaps do some basic spatial analysis on received signal strengths.

Obviously my first stop was good old Google Maps to plot the locations of the stations I was receiving. QRZ.com is a great site that lets you look up the QTH (location) of a station given its call sign - and this is what I used to determine where the stations were. Unfortunately Google Maps gives misleading results if you are interested in the location of stations relative to your own, because it uses an inappropriate map projection for displaying locations given in geographical coordinates (latitude and longitude). Azimuth and distance are distorted under the Mercator projection that Google Maps uses. A polar (azimuthal) projection centered at the receiving station is more appropriate for this type of map display.

So my second stop was to load the data into ArcGIS (a Geographic Information System) and choose a polar coordinate system centered on my apartment. I was intending to put some sort of nice interface on top of ArcEngine to allow me to add and query the logged stations more easily - however I found ArcMap so slow that I decided it would be better just to write my own display engine and implement the QSO logging software from scratch. This would also give me more creative control over what I wanted to do.

CueSo is the result of this software development effort (developed in C#):

Fig 1: Screen Capture of CueSo in Map Display and Spatial Analysis Mode.

The software allows you to:
  • Leverage the QRZ.com database to quickly search for stations given a call sign (CueSo consumes an XML-based service provided by QRZ.com).
  • Enter data with as much auto-complete and look-up intelligence as possible, to simplify and speed up an otherwise tedious task.
  • Record and log QSOs based on the ADIF (Amateur Data Interchange Format) standard fields.
  • Query the QSO database using SQL (Structured Query Language) - allowing complex queries to be performed.
  • Display QSOs in a polar map display. Map display options are configurable.
  • Choose from a number of geodetic engines to perform the distance, azimuth and projection calculations (Half-Versed-Sine (haversine), Spherical Law of Cosines, Vincenty, Redfearn's Formula) - see the sketch after this list.
  • Maidenhead locators (grid squares) are calculated on the fly and displayed along with geographical coordinates.
  • Perform spatial analysis on received signal strengths using an anti-aliasing kernel over the QSOs to generate a raster surface, which can be normalised by distance to station and radio path density. The raster engine is my own "RasterSurface" engine, which I will feature on this blog at some stage in the future.
  • Make simple graphs from the QSO database (countries contacted, contacts over time, frequency utilisation etc.).
  • Pull space weather information (important for radio propagation conditions) from the NOAA Space Weather web service.
  • Make audio recordings of the QSOs and play them back at a later date. I did this using the DirectX DirectSound libraries - but have also included an option for low-level WAV capture using code from Ianier Munoz (avoiding the need for DirectX).
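To illustrate two of the calculations mentioned above, here is a minimal C# sketch of the haversine distance / initial bearing and a six-character Maidenhead locator. This is an illustrative reimplementation, not CueSo's geodetic engine; the spherical Earth radius and the locator precision are assumptions of the sketch.

```csharp
// Sketch of a haversine distance/bearing calculation and a Maidenhead locator.
using System;

static class GeoCalc
{
    const double EarthRadiusKm = 6371.0; // mean Earth radius (spherical model)

    // Great-circle distance (km) between two points given in decimal degrees.
    public static double HaversineKm(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = ToRad(lat2 - lat1);
        double dLon = ToRad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusKm * Math.Asin(Math.Sqrt(a));
    }

    // Initial bearing (degrees from true north) from point 1 to point 2.
    public static double BearingDeg(double lat1, double lon1, double lat2, double lon2)
    {
        double dLon = ToRad(lon2 - lon1);
        double y = Math.Sin(dLon) * Math.Cos(ToRad(lat2));
        double x = Math.Cos(ToRad(lat1)) * Math.Sin(ToRad(lat2)) -
                   Math.Sin(ToRad(lat1)) * Math.Cos(ToRad(lat2)) * Math.Cos(dLon);
        return (Math.Atan2(y, x) * 180.0 / Math.PI + 360.0) % 360.0;
    }

    // Six-character Maidenhead locator (e.g. IO91wm for central London).
    public static string MaidenheadLocator(double lat, double lon)
    {
        lon += 180.0; lat += 90.0;                           // shift to positive ranges
        char fieldLon = (char)('A' + (int)(lon / 20));       // 20-degree fields
        char fieldLat = (char)('A' + (int)(lat / 10));       // 10-degree fields
        int squareLon = (int)(lon % 20) / 2;                 // 2-degree squares
        int squareLat = (int)(lat % 10);                     // 1-degree squares
        char subLon = (char)('a' + (int)((lon % 2) * 12));   // 24 subsquares per square
        char subLat = (char)('a' + (int)((lat % 1) * 24));
        return $"{fieldLon}{fieldLat}{squareLon}{squareLat}{subLon}{subLat}";
    }

    static double ToRad(double deg) => deg * Math.PI / 180.0;
}
```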
Screen Captures:
Fig 2: Logging on to QRZ.com Services.

Fig 3: CueSo displaying QSOs in tabular form with an SQL query applied. Left panel is the data entry control.
Fig 3.5: Perform advanced SQL queries using the query builder.

Fig 4: CueSo's map display and various display options. You can change colours, fonts, layer displays etc. And it's much faster than ArcGIS!

Fig 5: CueSo's GIS-like functionality showing Identify (point-and-click) and Search results.

Fig 6: Spatial analysis of signal strength received normalised by distance to station.

Fig 7: Graphing options (Countries contacted in this case). Also shows the list of available graphs that can be generated.

Fig 8: 3 day satellite environment plot from NOAA showing electron / proton flux, magnetometer and estimated planetary K values.

Fig 9: Latest auroral activity plot for the north pole (from NOAA).

Fig 10: Showing audio play back from a recorded QSO.

I want to add wave propagation modelling to the software at some stage in the future. The modelling will utilise the ITS Irregular Terrain Model algorithm for direct-wave propagation - but I also hope to include ionospheric and satellite propagation modelling modules too.

Friday, April 2, 2010

Modelling Geoidal Undulation and Reducing Ellipsoidal Elevations

I was once faced with a difficult geo-referencing problem. We had just acquired a rather extensive high-resolution aerial photography and LIDAR campaign in an area that had only datum transformation parameters accurate to 5m and no adequate geoidal model that could be used for the reduction of ellipsoidal elevations to the local datum. All data was delivered referenced to WGS84 (ITRF 2008) and the LIDAR data was in processed, ungridded, point cloud form (60 billion points). We needed:

  1. A model for vertical reduction from ellipsoid to local.

  2. An automated means of applying the vertical corrections.

  3. An automated means of gridding the LIDAR data.

  4. An automated means of re-projecting and applying higher precision horizontal transformations.

I determined the ellipsoid separation model by obtaining information from survey control stations where the local stations had also been occupied by GPS and those observations were available. These were few and far between and an even distribution across the survey area was highly desirable. In the end I managed to find 9 suitable primary control points and 4 secondary points that could be used for densifying the model.


Fig 1: Primary control points (large red crosses) and 4 additional secondary control points, taken from benchmarks of various topographic surveys, used for densification purposes (smaller red crosses).


A polynomial regression surface was fitted to the separation values from the control points using ArcGIS Geostatistical Analyst. The goodness of fit of the regression was also assessed.

Fig 2: Local Polynomial Regression Parameters in Geostatistical Analyst.



Fig 3: Geostatistical Analyst showing the regression function and fit to the input control data. The model fit had an average error of 6mm.


The regression surface was output at a suitably high resolution to a raster dataset. To automate the reduction and gridding of the point cloud data I turned to my trusty Raster Engine, which I modified so that the binning function could also accept a "correction" surface. I also added an interpolation method so that no "stepping" could be observed at the boundaries of the regression surface's cells in the final gridded LIDAR data. The LIDAR data had been delivered as several hundred tiles (for ease of data handling). The Raster Engine processed all tiles (60 billion points) with binning and vertical reduction in a matter of hours (without parallel processing).

Fig 4: 3D Visualisation of 3 processed LIDAR tiles in Priority Area 2. No "stepping" effects can be seen even under a vertical exaggeration of 25x - again validating the success of the sub-pixel interpolation of the separation grid.
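As a minimal sketch of the sub-pixel interpolation idea: sample the separation grid bilinearly at each LIDAR point so the applied correction varies smoothly rather than stepping at cell boundaries. The grid layout, field names and the simple local = ellipsoidal - separation sign convention are assumptions for illustration, not the actual Raster Engine code.

```csharp
// Bilinear sampling of a geoid/ellipsoid separation grid and vertical reduction.
using System;

class SeparationGrid
{
    readonly double[,] values;        // separation values, [row, col]
    readonly double originX, originY; // assumed: coordinates of the lower-left cell centre
    readonly double cellSize;

    public SeparationGrid(double[,] values, double originX, double originY, double cellSize)
    {
        this.values = values; this.originX = originX; this.originY = originY; this.cellSize = cellSize;
    }

    // Bilinearly interpolate the separation at an arbitrary (x, y) inside the grid.
    public double SeparationAt(double x, double y)
    {
        double gx = (x - originX) / cellSize;
        double gy = (y - originY) / cellSize;
        int col = (int)Math.Floor(gx);
        int row = (int)Math.Floor(gy);
        double fx = gx - col;   // fractional position within the cell
        double fy = gy - row;

        double v00 = values[row, col];
        double v10 = values[row, col + 1];
        double v01 = values[row + 1, col];
        double v11 = values[row + 1, col + 1];

        // Interpolate along x on both rows, then along y.
        double v0 = v00 + (v10 - v00) * fx;
        double v1 = v01 + (v11 - v01) * fx;
        return v0 + (v1 - v0) * fy;
    }

    // Reduce an ellipsoidal height to the local datum: local = ellipsoidal - separation.
    public double ReduceToLocal(double x, double y, double ellipsoidalZ)
        => ellipsoidalZ - SeparationAt(x, y);
}
```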

A number of secondary survey points (34) were reserved (i.e. not part of the regression model input) as a means of checking the validity of the model and the resultant reduction. During comparison these showed an average error of 2cm.

Improving the horizontal accuracy was achieved by comparing the re-projected high-resolution aerial photography with surveyed as-built features. A conformal transformation in grid space was derived for the entire survey area. Re-projection and application of the conformal transformation were automated using the ESRI re-projection engine. The entire dataset was processed in one weekend on a single workstation.
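For reference, applying a 2D conformal (four-parameter similarity) transformation in grid space looks roughly like the sketch below: one scale, one rotation and two translations. The parameter values shown in the usage comment are placeholders; in practice they were derived from the comparison with the surveyed as-built features.

```csharp
// A minimal 2D conformal (similarity) transformation: x' = a*x - b*y + tx, y' = b*x + a*y + ty.
using System;

readonly struct Conformal2D
{
    readonly double a, b, tx, ty; // a = s*cos(theta), b = s*sin(theta)

    public Conformal2D(double scale, double rotationRadians, double tx, double ty)
    {
        a = scale * Math.Cos(rotationRadians);
        b = scale * Math.Sin(rotationRadians);
        this.tx = tx; this.ty = ty;
    }

    // Transform a grid coordinate (easting, northing).
    public (double X, double Y) Apply(double x, double y)
        => (a * x - b * y + tx, b * x + a * y + ty);
}

// Example usage with placeholder parameters:
// var t = new Conformal2D(scale: 1.0000021, rotationRadians: 0.0000045, tx: 1.82, ty: -0.95);
// var (e, n) = t.Apply(455120.34, 6712045.88);
```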

Monday, September 21, 2009

Correction of Airborne Laser Scanner Data

In my current job I work closely with a team of engineers who are quite interested in all things related to ice. They want to understand how ice movements in a shallow sea interact with man-made structures such as pipelines, platforms etc.

One of the things that they are interested in measuring is the shape and size of certain ice features (like ice-bergs, grounded ice piles, rubble fields etc). They want this information to estimate volumes and measure the critical angle of ice rubble pile-ups.

We've used various techniques to acquire this type of information; everything from analysis of optical and synthetic aperture radar satellite imagery to terrestrial laser scanning and even laser scanning from helicopters.

During one campaign, laser scanner data was acquired from a helicopter but no motion / attitude sensor was available to correct the data for pitch, roll or yaw. A non-survey-grade GPS was used to record the position of the helicopter, and heading could only be interpolated as "course made good" between two position fixes - so the spatial referencing really wasn't very good.

My colleagues wanted to measure the angles of the ice rubble piles from this data - but I warned them that the data would be misleading because of the lack of attitude correction. Extreme angles might be measured that are influenced by a large roll event, for example.

After looking at the data I decided that it would be possible to automate the correction of the data even without information from a motion reference unit. This was possible because each scan line in the data also imaged portions of "flat" ice / sea surface, and this could be used to determine a "normal" flat surface. If the approximate elevation of the sensor is known then roll correction is possible.
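A rough sketch of that idea in C# is shown below: estimate the roll of each scan line from the returns flagged as flat sea / ice surface, then rotate the whole scan line to level it. The data structures, and the way surface returns are identified, are assumptions for illustration rather than the tool's actual implementation.

```csharp
// Per-scan-line roll correction using returns known to lie on a flat surface.
using System;
using System.Collections.Generic;
using System.Linq;

static class RollCorrection
{
    // A point in the scanner's across-track frame: Y = across-track offset, Z = height.
    public record ScanPoint(double Y, double Z, bool IsSeaSurface);

    // Assumes the scan line contains at least two flagged sea-surface returns.
    public static List<ScanPoint> CorrectScanLine(IReadOnlyList<ScanPoint> line)
    {
        // Least-squares fit z = m*y + c through the flagged sea-surface returns.
        var surface = line.Where(p => p.IsSeaSurface).ToList();
        double meanY = surface.Average(p => p.Y);
        double meanZ = surface.Average(p => p.Z);
        double m = surface.Sum(p => (p.Y - meanY) * (p.Z - meanZ)) /
                   surface.Sum(p => (p.Y - meanY) * (p.Y - meanY));

        // Interpret the slope of the "flat" surface as the roll angle.
        double roll = Math.Atan(m);
        double cos = Math.Cos(-roll), sin = Math.Sin(-roll);

        // Rotate every point in the scan line by -roll about the sensor origin.
        return line.Select(p => p with
        {
            Y = p.Y * cos - p.Z * sin,
            Z = p.Y * sin + p.Z * cos
        }).ToList();
    }
}
```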

I also realised that, to a certain extent, a semi-automated correction for yaw could be performed by picking correction points on the laser scanner dataset.

I built a quick tool to perform the automated analysis and correct the dataset. It also allowed the data to be geo-coded using the GPS data. A screen capture is included at the top of this post.

Monday, September 22, 2008

Extraction of Coral Features from MBES data using GRASS

In 2006 I was faced with a problem. Where I worked at the time we were acquiring large volumes of seabed information in the form of discrete multi-beam echo sounder (MBES) data for bathymetry and side scanning sonar to identify seabed features. Our vector database of classified coral outlines had not been updated based on these datasets for a few years and it was a big task to re-interpret the lot.

We were not recording backscatter intensity from the MBES, and SSS (side scanning sonar) has some inherently funky geometry which makes it hard to work with and integrate with MBES for automated feature extraction purposes (although I did some interesting work on this in 1999/2000 that I may post if I find the results). I had some previous experience with terrain analysis and thought I could lend a hand.

I had done quite a bit of work in the past using 2nd differentials of terrain (more commonly known as rugosity). If the first differential of a terrain dataset is slope - the rate of change of the vertical component over a horizontal surface (essentially the vertical variability or undulation of the terrain) - then the second differential is "slope of slope", or the rate of change of the rate of change of the vertical over the horizontal. This is essentially a measure of how rugged a surface is. The theory is that coral tends to exhibit higher rugosity than the surrounding seabed.
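The "slope of slope" idea can be sketched very simply in C# (the actual work was done with GRASS raster operations): compute the gradient magnitude of the height grid with a central-difference kernel, then run the same operator over the slope grid. Cell indexing and edge handling are simplified here.

```csharp
// Rugosity as the second differential of terrain: slope of the slope grid.
using System;

static class Rugosity
{
    // Magnitude of the gradient at each cell, via central differences.
    public static double[,] Slope(double[,] grid, double cellSize)
    {
        int rows = grid.GetLength(0), cols = grid.GetLength(1);
        var slope = new double[rows, cols];
        for (int r = 1; r < rows - 1; r++)
            for (int c = 1; c < cols - 1; c++)
            {
                double dzdx = (grid[r, c + 1] - grid[r, c - 1]) / (2 * cellSize);
                double dzdy = (grid[r + 1, c] - grid[r - 1, c]) / (2 * cellSize);
                slope[r, c] = Math.Sqrt(dzdx * dzdx + dzdy * dzdy);
            }
        return slope;
    }

    // "Slope of slope": the second differential used as a rugosity measure.
    public static double[,] SlopeOfSlope(double[,] grid, double cellSize)
        => Slope(Slope(grid, cellSize), cellSize);
}
```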

Another technique for finding the edges of the coral is to perform "inflection" analysis. In this case you take a smoothed, lower resolution version of the terrain (spline fitting is often well suited to this) and intersect it with a higher resolution model. The points at which the absolute difference between the two models is zero are often a pretty good approximation of the edge of the coral. This method sometimes seems to work better than using the 1st differential (high slope) alone.
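Here is a small C# sketch of that inflection test: subtract the smoothed surface from the detailed one and flag the cells where the difference changes sign against a neighbour. The smoothing step itself (e.g. the spline fit) is assumed to have been done already and is not shown.

```csharp
// Flag candidate coral-edge cells where (detailed - smoothed) crosses zero.
static class InflectionAnalysis
{
    public static bool[,] EdgeCandidates(double[,] detailed, double[,] smoothed)
    {
        int rows = detailed.GetLength(0), cols = detailed.GetLength(1);
        var diff = new double[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                diff[r, c] = detailed[r, c] - smoothed[r, c];

        // A sign change between neighbouring cells marks a zero crossing (an inflection).
        var edges = new bool[rows, cols];
        for (int r = 0; r < rows - 1; r++)
            for (int c = 0; c < cols - 1; c++)
                edges[r, c] = diff[r, c] * diff[r, c + 1] < 0 ||
                              diff[r, c] * diff[r + 1, c] < 0;
        return edges;
    }
}
```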

A combination of slope, rugosity, inflection analysis and various "grow" and "thin" filters eventually results in a fairly accurate extraction of the coral areas - although a bit of tuning is required depending on the resolution and general properties of the input terrain model.

I had worked on similar problems in the past using custom filters and kernels in ER Mapper and also using ESRI technology in Spatial Analyst. As I was mucking around with GRASS under Linux at the time, I decided to use this environment instead. I was extremely impressed! I found it particularly powerful that I could write bash scripts to automate and tie together many processes.

After successful extraction of coral features in GRASS we vectorised the results in Spatial Analyst, applied some sinuosity filters to remove jagged effects and then updated the coral outlines data in the GIS.

The screen capture below shows the results against a dataset of varying quality. It should be noted that this was taken part way through the extraction process and does not represent the final classification. Unfortunately I could not find a screen cap of what was finally extracted, nor could I find the bash scripts we used in GRASS... I'll keep looking.

Sunday, September 21, 2008

Linux, The Birth of the Raster Engine and Seismic Matching Algorithms

In 2008 I was getting re-acquainted with *nix systems all over again. My primary computer at home was running Ubuntu Linux and OpenSolaris (UNIX) and I was getting into all the great open source software that's out there. For me that meant:
  • C++ / Java with the Eclipse and KDevelop IDEs
  • .NET programming on Mono
  • MySQL databases and PHP web programming
  • ... and of course the mighty GRASS GIS (which is one of my favourite serious GISs).
I even converted my entire Beowulf cluster (of only 4 nodes) from Windows Server 2003 to an Ubuntu variant (Mint).

I've always been the type of person who takes work home - and I usually work a few hours in the evenings and on the weekend just tidying things up. I guess it's OK when you're excited about what you're doing and / or you don't have a baby to look after (as my wife and I do now :) ). I don't feel right if I have not laboured over an interesting problem until I am completely knackered or the accompanying glass/es of wine haven't made me give up.

It was during this time that, for fun, I was writing my own raster engine in C++ - the idea was that I would get used to developing for Linux and also dust the cobwebs off my C++ skills / remember how to use pointers! The engine could read and write ASCII XYZ / CSV, ESRI ASCII and ZYCOR ASCII formats, as well as address cells by coordinate etc. - the usual stuff you would need from a basic raster manipulation library. I also hoped that by targeting Linux I could later use the engine for batch or parallel processing on a cluster of 64-bit machines - massively increasing the amount of memory I could address and therefore the size of the rasters I could manipulate. I compiled on my Ubuntu OS using g++ (GCC), and during performance testing the engine easily beat the ESRI engine and even the GRASS engine (by a factor exceeding 100).
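The original engine was written in C++; purely to illustrate the core idea of addressing cells by world coordinate rather than by row / column, here is a small C# sketch. The field names and the assumed row ordering (origin at the lower-left corner) are my own choices for the example, not the engine's actual design.

```csharp
// A toy raster that can be read and written by world coordinate.
using System;

class SimpleRaster
{
    readonly double[] cells;
    public int Cols { get; }
    public int Rows { get; }
    public double OriginX { get; }   // assumed lower-left corner of the grid
    public double OriginY { get; }
    public double CellSize { get; }

    public SimpleRaster(int cols, int rows, double originX, double originY, double cellSize)
    {
        Cols = cols; Rows = rows; OriginX = originX; OriginY = originY; CellSize = cellSize;
        cells = new double[cols * rows];
    }

    // Convert a world coordinate to a (col, row) index.
    public (int Col, int Row) IndexOf(double x, double y)
        => ((int)Math.Floor((x - OriginX) / CellSize),
            (int)Math.Floor((y - OriginY) / CellSize));

    // Read or write the cell containing a world coordinate.
    public double this[double x, double y]
    {
        get { var (c, r) = IndexOf(x, y); return cells[r * Cols + c]; }
        set { var (c, r) = IndexOf(x, y); cells[r * Cols + c] = value; }
    }
}
```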

Fortunately a work problem presented itself for which my newly created raster engine would be perfectly suited...

The geophysicist in our group had noticed that some old seismic data was not correctly positioned (grossly so). Throughout the datasets he was looking at, it seemed that a number of lines did not match when he compared the seabed picks with real bathymetric data (where we were working at the time this was a fairly reliable method of checking, because the seabed had a very distinct profile due to the presence of corals). He asked me if there was an automated way to perform this comparison and, if so, whether it was possible to determine what the correct translation and rotation of the seismic data might be.

I quickly realised that my C++ raster engine could be of use in this case. It was perfect for loading up very large surfaces and could easily generate a profile given a series of coordinates. It was also possible to compare a given XYZ profile with one generated from my engine to determine whether the two profiles were similar. I also formulated a method whereby iterative coordinate translations and rotations were applied to the given profile until a closer match was found with the seabed profile. It took just over half an hour to run all permutations for a given profile.
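The matching step can be sketched as a brute-force search: for each candidate translation and rotation of the seismic line's coordinates, sample the bathymetry surface along the transformed line and score the fit against the seabed picks by the RMS of the vertical residuals. This C# sketch stands in for the C++ implementation; the sampleSurface delegate represents the raster engine's profile extraction and is an assumption of the example.

```csharp
// Brute-force search over translations and rotations for the best profile match.
using System;

static class ProfileMatcher
{
    public record Candidate(double Dx, double Dy, double RotationRad, double Rms);

    public static Candidate FindBestMatch(
        (double X, double Y, double Z)[] seismicPicks,
        Func<double, double, double> sampleSurface,   // returns seabed depth at (x, y)
        double[] offsets, double[] rotations)
    {
        // Rotate about the centroid of the line so rotation and translation stay decoupled.
        double cx = 0, cy = 0;
        foreach (var p in seismicPicks) { cx += p.X; cy += p.Y; }
        cx /= seismicPicks.Length; cy /= seismicPicks.Length;

        Candidate best = new(0, 0, 0, double.MaxValue);
        foreach (double rot in rotations)
            foreach (double dx in offsets)
                foreach (double dy in offsets)
                {
                    double cos = Math.Cos(rot), sin = Math.Sin(rot), sum = 0;
                    foreach (var p in seismicPicks)
                    {
                        double x = cx + (p.X - cx) * cos - (p.Y - cy) * sin + dx;
                        double y = cy + (p.X - cx) * sin + (p.Y - cy) * cos + dy;
                        double residual = sampleSurface(x, y) - p.Z;
                        sum += residual * residual;
                    }
                    double rms = Math.Sqrt(sum / seismicPicks.Length);
                    if (rms < best.Rms) best = new Candidate(dx, dy, rot, rms);
                }
        return best;
    }
}
```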

Fig 1: On top is the seabed profile generated from the raster engine (over a nice piece of coral) and below the absolute vertical component residuals from the comparison with the input profile from seismic data. A negative match is obvious.

It was at this point that I knew that if we were to process this across the entire seismic dataset then we would need to lease time on the dedicated batch processing compute cluster.
Where I worked we had 2 dedicated compute clusters, one of which was based on Red Hat Enterprise Linux (RHEL) (I'm not sure what the other ran, but I think it is also now RHEL-based). I used Red Hat (before they went enterprise) when I was at university, so it wasn't too difficult to move to KDE from the Ubuntu GNOME environment. We recompiled the raster engine for RHEL and scheduled the jobs for processing on the cluster.

There are no flashy graphics in this post - but almost overnight we managed to process all of the seismic lines and determine at least which ones had dodgy positioning. It turned out that the translation / rotation matching algorithm was only mildly successful (I'm sure given time I could improve it) - we had less than a 20% success rate finding matches using the approach adopted, but in most cases the determined match made sense.