“I have not failed. I’ve just found 10,000 ways that won’t work.”
― Thomas A. Edison
As I noted a couple of posts ago, “wearable devices” are all the rage in Parkinson’s research. “Wearable devices” is a more wonkish term for what I would call a magic watch: a watch that can record (for example) steps, heartbeats, and other body parameters. I started this post with a quote from Thomas Edison because I think Tom may have had an easier time than the folks at Fox Insight. Tom just had to invent the light bulb. Researchers using wearable devices appear to be inventing the device as they go along, re-inventing the medical research model, and, oh yes, also attempting to carry out some worthwhile medical research.
Fox Insight is the name for Michael J. Fox Foundation’s online study to learn more about the lived experience of Parkinson’s disease (PD). The Foundation’s hope is that this “data will offer researchers unprecedented insights into the intricacies of living with PD and may help the field uncover associations that could point to new therapeutic approaches.”
Read that quoted sentence (from the MJFF website) carefully. The approach here is the classic medical research model turned upside down. Instead of starting with a hypothesis and collecting data to test it, this project collects whatever data it can massage the magic watch into collecting and then tries to “uncover associations.” (Hmmm… say… people over 6 feet have more tremors than people under 6 feet… so tallness raises the risk of PD…?)
This is not necessarily a bad research model; it just needs to be recognized as different and handled appropriately. For instance, I’m afraid pretty much all the data I contributed to Fox Insight should be thrown out as invalid. That would be a disaster in a traditional medical trial with a small number of participants. But in this study with thousands of participants — eh, not so much.
Why do I think my data needs to be thrown out? The Fox Insight version I participated in “measured” three things: high activity, tremors, and sleep. Measuring tremors makes sense for PD, but the other two parameters have only a tenuous connection with it. This seemed more like “data we can collect” than “data that is relevant to collect.” The research value was further limited by crude and inaccurate measurement. I never could figure out what “high activity” was (it wasn’t heart rate), and the watch appeared to measure some kinds of activity (e.g., walking) better than others (skiing, the elliptical). My tremors are still very mild, but I know they have increased over the last 6 months… yet these tremors never showed up in the data.
So this data appears to be of dubious research value. Two other factors further invalidate it: 1) Because of poor design, I frequently recorded my medication times incorrectly. The interface was confusing in the first place, and there was no way to go back and correct mistakes. Accurately recording medication is critical to reliably “uncovering associations.” 2) The magic watch does not stand alone; it relies on connectivity from watch to phone to MJFF server. This connection frequently broke, and it would take me a while to get around to fixing it, so there were many gaps in the record. Eventually I could no longer make the connection at all, and after multiple attempts, I finally gave up on the watch.
I have not given up on the concept, and I intend to try out some future, better-designed version. Indeed, after submitting four pages of feedback and recommendations, I was invited to be a beta tester for the next version. I had to turn them down because I’ve got a lot of other things going on right now, but I asked to be considered for future rounds of recruiting. One thing I learned is that an entirely passive test is much easier on me (just wear the watch), but to have real research value, I need to perform active tests as well (e.g., balance tests, voice-softness measurements, reflex tests). (See my post about an earlier project with active tests.)