Project Tide Rising

If you’d asked me in June 2015 whether we’d have units being tested in Malawi and South Africa by April 2016, I would have said, “of course. They’ll have been there for a few months by then.”

Past me was an optimistic fool.

I am still just as foolish, thinking things will be far simpler than they turn out to be, but now I know the difficulties of making an Android app.

The main thing I would change is panicking less. Sending the project into the wide world feels like getting off a roller coaster.

In the Beginning

In June 2015 we had a web page that could sometimes find the TB LAM strip, and then sometimes detect whether the patient area had a line in it. That was good enough for us to pique BGV’s interest and get some investment. It also gave us time in a start-up incubator, which provided me with valuable insights into how startups work.

Development progressed at a fair rate during our three months with BGV and our cohort. It felt like every day I would come up with a new and ingenious way to not detect the patient’s result.

Algorithm’s Progress

Our first milestone was getting the stuff in the web page onto a phone. At first we thought we’d use Cordova, because then we could carry on working on a web page and still ship to various phone operating systems, but Cordova doesn’t support direct access to the camera. So we went straight to native Android development in Java. This was advantageous anyway, because the smartphones we found to be most often used in resource-poor settings run Android.

Once I had set up the app on Android, I thought I would try out OpenCV to see what it could see. It didn’t see anything. I went back to the original approach.

In its original state the app would try to find the strip by locating the cyan parts of the strip in its centre. From there, the gaps between the cyan chunks indicate the patient and control areas. This had the same success rate as the web page, which wasn’t good enough. So we decided to add a mask to the image preview, so the user could line up the strip and the algorithm wouldn’t have to figure out where the strip was.
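The real code was messier, but the idea was roughly this. It’s a sketch rather than the app’s actual code: it assumes the frame arrives as an ARGB int array, and the cyan thresholds are illustrative.

```java
// Rough sketch of the "find the cyan chunks" idea. Thresholds are illustrative.
public final class CyanFinder {

    /** A pixel counts as "cyan-ish" if green and blue are strong while red is weak. */
    static boolean isCyan(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return r < 100 && g > 150 && b > 150;
    }

    /**
     * Walks along the middle row of the image and records runs of cyan pixels.
     * The gaps between consecutive runs are candidates for the patient and
     * control windows. Returns the start/end x of each cyan run.
     */
    static java.util.List<int[]> cyanRunsOnMidline(int[] pixels, int width, int height) {
        java.util.List<int[]> runs = new java.util.ArrayList<>();
        int y = height / 2;
        int runStart = -1;
        for (int x = 0; x < width; x++) {
            boolean cyan = isCyan(pixels[y * width + x]);
            if (cyan && runStart < 0) {
                runStart = x;                                // run begins
            } else if (!cyan && runStart >= 0) {
                runs.add(new int[] { runStart, x - 1 });     // run ends
                runStart = -1;
            }
        }
        if (runStart >= 0) runs.add(new int[] { runStart, width - 1 });
        return runs;
    }
}
```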

The first stumbling block I encountered, which took a while to figure out, was that the preview and the final image aren’t the same dimensions. The final image cuts off a lot of useful stuff contained in the preview. Getting past this meant using the preview as the final image. Not ideal, as the resolution is lower, but acceptable.
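For anyone fighting the same battle, grabbing a preview frame with the old android.hardware.Camera API (what we were on at the time) looks roughly like this. Surface setup, lifecycle and error handling are left out, and the class name is mine.

```java
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.ByteArrayOutputStream;

public class PreviewGrabber {

    public static void grab(Camera camera) {
        final Camera.Size size = camera.getParameters().getPreviewSize();
        camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // Preview frames arrive as NV21 YUV by default; compress to JPEG
                // so the rest of the pipeline can treat it like a captured photo.
                YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, out);
                byte[] jpegBytes = out.toByteArray();
                // ...decode jpegBytes and hand the pixels to the strip-reading code.
            }
        });
    }
}
```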

The second stumbling block was that holding my hand still while holding a phone is difficult, which means keeping the mask over the strip is difficult, so getting the patient and control data was haphazard.

While we thought about the implications of this, I tried a new idea I had for finding the strip in the image. Each strip has a large area of blue. I cut the image into 36 chunks and found the chunk that was most blue. This worked really well (except on blue backgrounds), but there was a flaw in the plan: there is no way to tell the orientation of the strip.
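Sketched out, again assuming an ARGB int array; the 6×6 grid and the “blue dominance” score here are illustrative choices, not the production code.

```java
// Sketch of the "bluest chunk" idea: split the frame into a 6x6 grid and pick the
// cell whose pixels are most blue-dominant.
public final class BluestChunk {

    /** Returns {col, row} of the bluest cell in a 6x6 grid. */
    static int[] findBluestChunk(int[] pixels, int width, int height) {
        final int cols = 6, rows = 6;
        int bestCol = 0, bestRow = 0;
        long bestScore = Long.MIN_VALUE;
        for (int cy = 0; cy < rows; cy++) {
            for (int cx = 0; cx < cols; cx++) {
                long score = 0;
                int x0 = cx * width / cols, x1 = (cx + 1) * width / cols;
                int y0 = cy * height / rows, y1 = (cy + 1) * height / rows;
                for (int y = y0; y < y1; y++) {
                    for (int x = x0; x < x1; x++) {
                        int p = pixels[y * width + x];
                        int r = (p >> 16) & 0xFF;
                        int g = (p >> 8) & 0xFF;
                        int b = p & 0xFF;
                        score += b - (r + g) / 2;   // blue dominance of this pixel
                    }
                }
                if (score > bestScore) {
                    bestScore = score;
                    bestCol = cx;
                    bestRow = cy;
                }
            }
        }
        return new int[] { bestCol, bestRow };
    }
}
```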

Discovering this, we decided to go back to the mask method, with the addition of a stand that would keep the phone and the strip stationary in relation to one another.

Taking a Stand

Our stand was designed by a very clever man by the name of Nathan Bentall. It holds the phone opposite the strip in the horizontal plane, rather than the vertical, so that the phone doesn’t cast an annoying shadow on the strip.

From this I developed a method for assessing the quality of the patient and control areas. I took the standard deviation of the Y position of the darkest pixels. If the deviation was small, then I graded the strip as positive for the LAM reaction. This worked really well for the strong reactions, and for no reaction, but for the weak reactions it was kind of random.
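In sketch form: collect the darkest pixels in the window and look at how tightly their Y positions cluster, since a real line puts them in a narrow band. The brightness formula and sample size here are illustrative, and the threshold on the spread has to be tuned against known strips.

```java
import java.util.Arrays;

public final class LineCheck {

    static int luminance(int argb) {
        int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
        return (r + g + b) / 3;   // simple brightness; darker means a stronger reaction
    }

    /** Standard deviation of the Y coordinates of the n darkest pixels in the window. */
    static double darkestPixelYSpread(int[] pixels, int width, int height, int n) {
        // Index all pixels, sort by luminance, keep the darkest n.
        Integer[] idx = new Integer[width * height];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> luminance(pixels[a]) - luminance(pixels[b]));

        double mean = 0;
        for (int i = 0; i < n; i++) mean += idx[i] / width;   // integer division gives Y
        mean /= n;

        double var = 0;
        for (int i = 0; i < n; i++) {
            double dy = (idx[i] / width) - mean;
            var += dy * dy;
        }
        return Math.sqrt(var / n);   // small spread = pixels cluster in one band = a line
    }
}
```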

We attacked this problem in three ways.

Firstly, out of concern that “no reaction” strips could start showing up as positive (the darkest pixels of a blank strip could coincidentally line up), I started testing the average darkness of the left, middle and right thirds of the patient and control areas, because the middle portion is where the reaction takes place and so should be the darkest.
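Sketched out, assuming the patient or control window has already been cropped to its own ARGB int array; the margin value is illustrative.

```java
// Sketch of the "thirds" sanity check: the reaction line sits in the middle third of
// the window, so the middle should read darker than the flanks.
public final class ThirdsCheck {

    static double meanDarkness(int[] pixels, int width, int height, int x0, int x1) {
        long sum = 0;
        int count = 0;
        for (int y = 0; y < height; y++) {
            for (int x = x0; x < x1; x++) {
                int p = pixels[y * width + x];
                int lum = (((p >> 16) & 0xFF) + ((p >> 8) & 0xFF) + (p & 0xFF)) / 3;
                sum += 255 - lum;   // invert so "darker" scores higher
                count++;
            }
        }
        return (double) sum / count;
    }

    /** True if the middle third is noticeably darker than both outer thirds. */
    static boolean middleIsDarkest(int[] pixels, int width, int height, double margin) {
        double left   = meanDarkness(pixels, width, height, 0, width / 3);
        double middle = meanDarkness(pixels, width, height, width / 3, 2 * width / 3);
        double right  = meanDarkness(pixels, width, height, 2 * width / 3, width);
        return middle > left + margin && middle > right + margin;
    }
}
```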

Secondly, to enhance the strength of the signal, I started taking multiple photos of the strip.
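As an illustration of the general idea, here is a straightforward per-pixel average of aligned frames; a sketch only, and the app’s actual combination step may differ.

```java
// Averaging several photos of the same strip knocks down sensor noise before grading.
// Assumes the frames are the same size and already aligned.
public final class FrameAverager {

    /** Per-pixel, per-channel average of several same-sized ARGB frames. */
    static int[] average(java.util.List<int[]> frames, int width, int height) {
        int n = frames.size();
        int[] out = new int[width * height];
        for (int i = 0; i < out.length; i++) {
            int r = 0, g = 0, b = 0;
            for (int[] frame : frames) {
                int p = frame[i];
                r += (p >> 16) & 0xFF;
                g += (p >> 8) & 0xFF;
                b += p & 0xFF;
            }
            out[i] = 0xFF000000 | ((r / n) << 16) | ((g / n) << 8) | (b / n);
        }
        return out;
    }
}
```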

Thirdly, the flash was enabled, so that the image was less affected by ambient light. This change required a sizeable modification to the stand, in the shape of a translucent lens in front of the flash, to prevent glare and oversaturation. This was again engineered by Nathan.

These things combined allow us to reliably detect the difference between no reaction and some reaction, even at the lower levels.

Finally

By this point the trial of the LAM strip was under way in Malawi, so we gave a phone and stand to our Clinical Lead, Dr. Ankur Gupta-Wright, who has taken it away for testing in the Real World™.

We have since then also sent a unit to a hospital in South Africa, to be tested by Dr. Amy Ward.

The prospect of user feedback gives me chills. Preliminary results suggest there is more work to be done, but that’s another story.