Friday, February 24, 2017


Here are just the critical links to this saga. Thanks to my friend MJ for bringing this to my attention! [image credit: Google]

The Cloudbleed Story:

Article by Thomas Fox-Brewster of Forbes:

...which refers to the finder of the bug, Tavis Ormandy, and this blog post:

And Cloudflare's announcement, which includes many technical details worth reading:
... which lays out the root cause (sigh)
/* generated code */
if ( ++p == pe )
    goto _test_eof;
The root cause of the bug was that reaching the end of a buffer was checked with the equality operator, so a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of ==, stepping past the buffer's end would have been caught.
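A minimal sketch of the failure mode (a hypothetical scanner, not Cloudflare's actual parser): if any step advances the pointer by more than one byte, an == end-of-buffer check can be jumped over entirely, while >= still catches the overshoot.

```cpp
#include <cstddef>

// Hypothetical scanner over [p, pe). If some step advances p by more
// than one byte, p can jump right over pe, and a check written as
// `p == pe` never fires: the scan runs off the end of the buffer.
// Writing the check as `p >= pe` catches the overshoot.
size_t scan(const char* p, const char* pe) {
    size_t iterations = 0;
    for (;;) {
        p += 2;              // simulate a step larger than one byte
        if (p >= pe)         // with `==`, an odd-length buffer loops forever
            break;
        ++iterations;
    }
    return iterations;
}
```

With an odd-length buffer, `p` never lands exactly on `pe`, which is why the equality form is dangerous.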


What is Google Project Zero? And why does it exist? Pretty fascinating, actually:
Project Zero is the name of a team of security analysts employed by Google tasked with finding zero-day vulnerabilities. It was announced on 15 July 2014. [wikipedia]
The Wikipedia summary is definitely worth a read.

And the Project Zero blog:

#cloudbleed #cloudflare #GoogleProjectZero 

Tuesday, February 21, 2017

An impressive-looking Apple ID phishing page

Just now I received this email. It's a classic example of phishing.
There is something clearly wrong here...
... this is not likely a valid Apple email address: 
Apple <>
I moused over the "" hyperlink, which actually goes to 
This is clearly not in the apple domain.
Curious, I clicked on that and wound up at this authentic-looking web page at

It's one of the more clever phishing attacks I've seen recently.
Looks pretty real, doesn't it? That's because it is a complete copy of the real Apple page:

In case you wonder, the real page looks like this:

Now I'm going to report this as a phishing page at

This exact attack has been around at other addresses, for example here.

Sunday, April 10, 2016

Your code as a Crime Scene

Code, Crime, Complexity: Analyzing software with forensic psychology | Adam Tornhill | TEDxTrondheim

A video worth watching.

"If we want to improve software development we should optimize for ease of understanding"

If only that were standard practice a big part of my life would be SO MUCH EASIER. Right now I am battling multiple open source libraries which have bugs which cause them to fail fairly repeatably. You will be shocked to hear that they are very poorly documented and hard to understand. 

When will programmers stop writing obtuse code (apparently intended to impress others with its difficulty and complexity) and start writing code which is easy to understand? Why is there such resistance to this?


Saturday, April 25, 2015

Unit Test of Embedded System Code

Manifesto: Unit Test of Embedded System Code

I'm not sure exactly what this means, but I intend to apply it, starting now.
And why not give my efforts a grandiose name?

Having just spent an interesting Saturday in a Code Retreat graciously hosted by OC Tanner, including fabulous food, I hereby resolve to start applying some of what I learned, even though I am not at all certain what that means. Why let ignorance stand in the way of application? Here are my intended actions. To anyone who knows what they are doing these ideas may sound infantile, but so be it. Has anyone else already figured this all out? A minute of Googling the title of this post does produce some hits, such as this paper from Parasoft.

What does it mean to test embedded code?

I'm envious of programmers who can live in the perfect world where their universe is just software: PC and web apps, databases, transaction processing, etc. Sure there is hardware there executing code and storing data, but you never have to touch it or debug it. Your PC either works or it doesn't. You have powerful debuggers and analyzers and never need to touch a multimeter or oscilloscope.

Embedded systems are a lot messier. For example I am working on a driver for the AD7794, a 24-bit delta-sigma analog to digital converter, with PGA and a bunch of other programmable features. It works up to 125 C, too!  Good news: it does a lot and the options are useful. Bad news: it is really complex and so the driver has to manage that complexity. It uses a rather unusual (in my experience anyway) SPI interface which does not synchronize with assertion of the slave select. My first problem was that I could not find any good examples of general purpose C code, so I started writing my own from scratch. The simplest reading of a device status register seemed to return nonsensical (but not random) data. How do I debug this? The failure might be in the part or in some hardware-access routine.

Then, another system uses a confusing series of shift registers to drive AC and DC loads in a system under control. How do I really test my code without also testing the outputs of these shift registers, for which I need special hardware? So in fact we are spinning a quick-turn circuit board to display the state of all system outputs and provide a means to test system inputs. It's not fully closed-loop, so it isn't capable of fully automated functional test, but it's better than the current state of affairs, where we can't even tell if we are driving most of the loads (you can't see a low-wattage heat pad turn on). This board will let us visually watch ones and zeros being walked across the outputs, both low-voltage DC and 120 VAC.
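The walking-bit patterns themselves are easy to generate in the driver; a sketch (the 16-bit bank width and function names are illustrative, and writing the pattern to the outputs is whatever the real driver provides):

```cpp
#include <cstdint>

// Walking-ones / walking-zeros patterns for a 16-bit output bank:
// exactly one output on (or off) at a time, so a person watching the
// display board can spot a stuck or swapped output immediately.
uint16_t walkingOne(int bit)  { return uint16_t(1u << bit); }
uint16_t walkingZero(int bit) { return uint16_t(~(1u << bit)); }
```

Stepping `bit` from 0 to 15 and writing each pattern in turn walks the lit (or dark) output across the whole bank.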

Testing once is not enough

Maybe in the perfect world of software it is, but not in the embedded space. When you go home at night, why not leave test code running with some way of logging errors? You might be shocked to find that there is some failure 0.001% of the time. If you can run a million or more tests overnight you might be lucky and see 10 such failures. This is indeed lucky: better to find and fix the problem early on than have customers later report mysterious failures in the field.
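As a sketch of what I mean, here is a minimal soak loop; `flaky_test` is a stand-in that fails once every 100,000 runs to mimic a rare intermittent fault, and would be replaced by whatever single-shot test you already have:

```cpp
#include <cstdio>

// Stand-in "device test": fails once every 100000 runs to mimic a
// rare intermittent fault. Replace with a real single-shot test.
static unsigned long g_runs = 0;
bool flaky_test() { return (++g_runs % 100000) != 0; }

// Soak loop: run the test many times and log every failure with its
// iteration number, so overnight faults leave a trace in the log.
unsigned long soak(bool (*test)(), unsigned long iterations) {
    unsigned long failures = 0;
    for (unsigned long i = 0; i < iterations; ++i) {
        if (!test()) {
            ++failures;
            std::printf("FAIL at iteration %lu\n", i);
        }
    }
    return failures;
}
```

A million iterations of a 0.001% failure rate should log about ten failures, which is exactly the kind of evidence you want waiting for you in the morning.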

Consider functional test hardware as part of the original system design

What good is a system which can't be functionally tested? Usually this means special custom hardware (it can be simple, but it is still custom) just for the purpose of functional test. So consider that as part of the original system design and include it in the budget and schedule. If it's a design for someone else, explain to your customer the benefits of your doing so.

Add a C++ test class to every device driver

I'm working on several C/C++ device drivers for prototypes of commercial products. They are really C with just the thinnest layer of C++. I normally write tests as I go but don't save all of them. Many times they end up as chunks of commented-out code (a bad pattern/habit I'm working on correcting), or old versions of the driver which get discarded or lost in an archive. Instead, why not write a deliberate test class which can then be invoked from a simple driver test program and run throughout the product's life, as needed? This way the tests stay as part of the project but don't have to get built into the shipping binary. It seems like this should work. I'll try it straight off and let you know. Maybe everyone else in the world already does this and I'm the last to adopt it.
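Here is a rough sketch of what I have in mind; the class and method names are hypothetical, and the driver shown is a stand-in rather than the real AD7794 code:

```cpp
#include <cstdio>

// Stand-in for the real driver under test.
class Ad7794 {
public:
    int readStatus() { return 0x40; }  // fake status value for the sketch
};

// A deliberate test class that lives alongside the driver in the
// project, but is only linked into a small test program, never into
// the shipping binary.
class Ad7794Test {
public:
    explicit Ad7794Test(Ad7794& dut) : dut_(dut) {}

    // Each test returns true on pass; a tiny runner tallies results.
    bool testStatusReadable() { return dut_.readStatus() >= 0; }

    int runAll() {
        int fails = 0;
        if (!testStatusReadable()) { ++fails; std::puts("status FAIL"); }
        return fails;
    }

private:
    Ad7794& dut_;
};
```

The test program is then just a few lines: construct the driver, construct the test class around it, and call `runAll()`.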

Mock up data to test algorithms

What I mean here is that if there is some data processing needed to convert from a sensor's native format (e.g., the TMP102 stores temperatures in 13-bit two's complement, split across two separate byte reads, so negative values need sign extension) to a useful one (such as degrees C or F), it is a good idea to pass simulated data of all possible values to that routine so that you know it works and the converted temperature values will be correct. I've seen open source drivers that just didn't bother with this and ignored the special conversion needed for negative temperatures. This is especially true if the algorithm has other dependencies, such as calibration coefficients pulled from device registers or calibration memory. What if those calibration values are wrong, or span all possible values for the given data type: will a calculation overflow or wrap around? This overlaps into boundary testing.
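As a sketch of the kind of conversion I mean (the bit positions here are illustrative; check the datasheet for the real register layout):

```cpp
#include <cstdint>

// Sign-extend a 13-bit two's-complement reading assembled from two
// byte reads. If bit 12 is set the temperature is negative, and the
// sign must be extended into the top three bits of the 16-bit word.
int16_t signExtend13(uint16_t raw) {
    if (raw & 0x1000)        // bit 12 set: negative temperature
        raw |= 0xE000;       // extend the sign into bits 15..13
    return (int16_t)raw;
}

// One LSB is 0.0625 degrees C for this family of sensors.
float toCelsius(int16_t counts) { return counts * 0.0625f; }
```

Feeding `signExtend13` all 8192 possible 13-bit codes is cheap and proves the negative half of the range works before the real sensor ever reports a cold day.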

Test all boundaries

If a 12-bit ADC returns a value into a 16-bit data type, and it is right-justified, the top nibble should always be zero filled. But what if it isn't: either the converter fails, or it is not initialized properly, or noise infects the data lines? Will downstream calculations or conversions fail? Test all such routines with the maximum possible data values. Clip values to the maximum permissible in your code if that makes sense. For example in a PID algorithm, there is the danger of "integral windup" but this can be handled by limiting the range of your integral variable.
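A sketch of what such boundary checks might look like (the 12-bit width and the clamp limits are illustrative):

```cpp
#include <cstdint>

// A right-justified 12-bit ADC result must have a zero top nibble;
// anything else means a converter, init, or noise problem upstream.
bool adcInRange(uint16_t raw) {
    return (raw & 0xF000) == 0;
}

float clampf(float v, float lo, float hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

// Accumulate a PID integral term but never let it wind up past the
// limit, so a long saturation period can't cause a huge overshoot.
float integrate(float integral, float error, float dt, float limit) {
    return clampf(integral + error * dt, -limit, limit);
}
```

Running the conversion chain with 0x0FFF, 0x1000, and 0xFFFF as inputs is the quickest way to find out whether downstream math survives out-of-range data.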

Catch all exceptions and at least report a unique code

Sometimes, if there is a possible but inconceivable error state, I put in a message such as "this should never happen: error XXX", where XXX is a unique integer so every error is distinct. Imagine my surprise when I see this very error message later in system test. Clearly something big has not met my expectations, but at least a) I know it happened and b) I have a clue where to look in the code.
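A minimal sketch of the pattern (the error codes and the reporting hook are arbitrary placeholders):

```cpp
#include <cstdio>

// Every "impossible" branch reports a distinct integer, so a later
// sighting of the message pinpoints the exact call site.
static int g_last_error = 0;

void shouldNeverHappen(int code) {
    g_last_error = code;
    std::printf("this should never happen: error %d\n", code);
}

int handle(int state) {
    switch (state) {
        case 0: return 1;   // normal states...
        case 1: return 2;
        default:
            shouldNeverHappen(103);  // unique code for this call site
            return -1;
    }
}
```

Grepping the source for the reported code takes you straight to the offending branch.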

Use a C/C++ documentation tool

Documentation was never mentioned in the code retreat. Apparently it is not even taught in most CS and CE curricula. What's wrong with this picture? That's a topic for another day. One thing I like and miss about Java is javadoc: it is baked right into the tools, so there's no valid reason not to use it. With C/C++ you have to take extra steps to even find and install a tool. I'll try doxygen first.
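To show what I mean, here is a small doxygen-style comment on a hypothetical function (not a real driver API):

```cpp
/**
 * @brief  Convert a raw ADC result to millivolts.
 * @param  counts  Right-justified 12-bit conversion result (0..4095).
 * @param  vref_mv Reference voltage in millivolts.
 * @return Input voltage in millivolts.
 * @note   Hypothetical example of doxygen markup, not a real driver.
 */
int countsToMillivolts(int counts, int vref_mv) {
    return (counts * vref_mv) / 4096;
}
```

Running doxygen over a header full of blocks like this produces browsable HTML documentation with no further effort.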

Wednesday, February 18, 2015

Crawling up the Linux and Python Learning Curves

Why I'm writing about this:

  1. It helps me think it through on a deeper level and understand it better if I can explain it in complete sentences.
  2. My experience might be useful to others. The blogs I have read about other setup experiences have been very valuable.
  3. It's my first attempt to give something back to the open Linux community, which has made possible an amazing collection of software and resources.
  4. So I don't forget what I have done, and why.
  5. To share as I attempt to start a local Python For Kids programming group, starting with my almost-13-year-old son.

Over a year ago I set the goal to learn Linux and Python. Why? Because Linux servers pretty much run the world. Even little off the shelf NAS systems such as those by Synology (I use them) are running Linux. The more-or-less preferred Linux admin language is Python, so it ships with all standard Linux distros, even small ones like on Raspberry Pi and Beaglebone Black.

Plus I am in the midst of a commercial project using embedded Linux on a Beaglebone Black, and at the core of it, the Linux on it is pretty much the same as other Linux.

For over a year now, I've been running Ubuntu (now 14.10) on a Lenovo x131e netbook, with a Core i3, 8 GB RAM, 320 GB HDD, and a 1366x768 TFT screen. It is actually dual-booted with Windows 7 Pro, but it has been months since I've booted into Windows. I need to do that and catch up on Windows updates. But I digress. This is the system I take with me every day when I teach at Salt Lake Community College, so I use it to log into the wireless network there. I use LibreOffice to edit class materials. I use Chrome and Firefox to access the Canvas instructional framework. So far, I have found it as easy to use as Windows 7 Pro. Some things are actually better: battery life seems a little longer, connection to Bluetooth mice (Logitech and MS Sculpt) is more reliable, and boot times are shorter. Ubuntu 14.10 seems a little less stable than 14.04 LTS: occasional lock-ups, and sometimes I start an application (e.g. Chrome) but it can't create a window in the display even though it's running (htop or System Load Indicator shows it active). But overall, it's at least as stable as Win 7 Pro.

I travel around locally a lot and find it easier to connect to numerous WiFi points in Ubuntu. Also I tether to a Samsung Relay 4G phone, and this is painless: the phone appears as a wired Ethernet connection to Ubuntu, and data speeds are at least 2X higher on average than using the phone as a WiFi hotspot.

So, spurred on by this success, I have built up an AMD quad-core A8-5600K desktop with a 120 GB SSD, 1 TB HDD, 8 GB RAM, etc. It also dual boots Windows 7 Pro and Ubuntu (now 14.10). It's now my second desktop system.

Then a few weeks ago I purchased a Lenovo Yoga 11e netbook with a quad-core Intel 2930 CPU, 4 GB RAM, 128 GB SSD, and a 1366x768 IPS multi-touch screen. It came with Windows 8.1 Pro, but I have pulled that SSD, replaced it with a SanDisk 128 GB, and loaded Linux only on it. I'm typing this on that system, running Ubuntu MATE 14.10. An MS Sculpt BT mouse is connecting very reliably. But the touchscreen doesn't work at all, and the trackpad is so-so. The screen, though, is great: bright, wide viewing angles, good color, but glossy and shows all fingerprints. Also, suspend doesn't seem to work correctly at the moment. I'd like to get all these things sorted out.

I'm making some Google docs as I go and will post view-only links to those here.

About me:

  1. Not a Windows-hater. I like Windows 7 Pro for the most part and use it daily for my work. Many programs I need to use (Altium Designer, for one) are Windows-only. USB on Windows is a nightmare: I use many development boards with USB interfaces, and they all seem to assume they will be the only USB device installed on my computer. Wrong. Too often, clashing ensues. It can be a real headache, requiring uninstalling and rebooting, ad nauseam. Still, USB in Windows 7 is a lot better than under XP.
  2. Not in love with Windows 8.1 since I don't want my PC to look like a tablet or phone, and I don't want to connect with the MS cloud. I don't spend all day on social media or gaming. I don't want a dumbed-down "friendly" screen of tiles: I was fine with the Windows 7 GUI. Is USB a lot better under Windows 8.1? I don't know.
  3. I'm an electrical engineer with background in hardware design of mixed-signal (that means digital and analog) systems, including embedded controllers.
  4. Embedded Java enthusiast, though sadly in the US that market has pretty much dissipated due to neglect and stupid licensing by Sun ($100K to make Sunspots commercially? Really? No wonder there were zero takers). My company, Systronix, produced TStik, JStamp, JStik, SaJe, and other (at the time) ground-breaking embedded Java systems.
More to come.

Monday, February 16, 2015

Wired Ethernet at Home: Still a Great Idea

Wired gigabit Ethernet is going into our home remodel and I am sooo glad we did that. I will detail how I did it, including setting up dual digital TV tuners, a media server, DVR, HTPC, large-screen projection system, dual-band WiFi, VOIP phone, better-than-average WiFi security, and most of a 1000-foot roll of Cat5e. I've tried to do it all relatively cheaply too.

Why is (wired) Ethernet better than WiFi?

  1. More secure, if you care about that. Much harder for someone to sniff your wired Ethernet.
  2. Much better throughput. 1000BaseT is over 4X faster than 2.4 GHz WiFi and 2X faster than 5 GHz.
  3. No channel congestion. Use a sniffer (e.g. Fing or WiFi Analyzer) and it's astonishing how clogged 2.4 GHz is. No one else is on your wired Ethernet but you. 
  4. Easier management of guests: they just plug in. No need to enter their MAC addresses into a table of allowed WiFi clients.
  5. Wire is cheap: 1000 feet of Belden UTP Cat 5e is $136 online. Jacks and other hardware are reasonable too. You do have to wire up the jacks and connect to switches. But this is easier than you might think. If I can do it, so can you.

Why is WiFi better than wired?

  1. Those damn wires. It is a bit of a hassle plugging in. WiFi is so easy. But, as with fast food, easy does not mean best, or even good.
  2. Those damn wires: retrofitting into an existing building can be a hassle. In my case, we are tearing out most of the walls anyway, so we have a golden opportunity.
  3. Phones, tablets, and even some netbooks don't have an RJ45 jack, so they have no choice but WiFi.
  4. No switches, jacks, or routers to deal with. Well, at least fewer than with wired Ethernet.
More to come!

Tuesday, January 27, 2015

Fuel Surcharge???

A quotation received today for shipping some heavy countertop material by truck has a 17% "fuel surcharge". Huh? Diesel is at its lowest price since 2010. This is a 26% decrease from a year ago. If the trucking industry wants our sympathy when fuel prices are high, it needs to reciprocate when prices are low.