While previous posts report on the science behind statistical running models, I have been working on the design of a prototype app that will bring the technology to runners in practice, and I thought Halloween would be a good time to post these rattling, running skeletons.
There is no shortage of running apps on the market. Mobile phones, running watches and dedicated gadgets can collect information about runners' performance, calorie consumption, distance and elevation gain, and runners at all levels have embraced these technologies. So what is new here?
Well, current gadgets can collect positional data from GPS satellites, heart rate from skin contact, and accelerations and angular velocities of the body part to which they are attached, for instance the wrist. Forthcoming devices may also be able to collect blood oxygenation data via near-infrared spectroscopy. Such data are valuable for assessing running performance and the load on the physiological system as a whole, but they say little about the actual loads on the specific structures that are prone to injury, for instance the hip, knee and ankle joints, the Achilles tendon, other shin tendons, and the plantar fascia. For such structural data we need a detailed musculoskeletal model such as the AnyBody model shown below.
The trouble with detailed musculoskeletal models is that they require detailed input describing the runner and the running kinematics. As described in previous posts, big data is going to solve that problem for us, in combination with the little data we can extract from wearable gadgets and metadata about the runner. The scenario is that you can build and gradually refine a biomechanical model, like the one shown above, of your own physique, and you can make it move with your movement pattern. Then you can simulate the loads on your own joints and tendons and investigate different scenarios, such as: How will it affect my injury risk if I take shorter steps? Or: How would forefoot running affect my running economy?
Let us see how it can be done.
In the figure above, you see two fields with different colors. The reddish field on the left contains the processing of big data, which is done offline. There are two sources of big data:
- As many motion capture trials as we can get our hands on. Currently, we have about 180 trials in our database. The motion capture data are processed through the AnyBody Modeling System (AMS), returning joint angle histories and anatomical segment lengths.
- A database of externally measured anthropometric dimensions, such as stature, leg length and fingertip span. We have used the ANSUR data, comprising about 130 parameters measured on 5000 American soldiers. These data are also processed with AMS to convert them to anatomical dimensions.
A somewhat complicated signal processing algorithm converts the periodic joint angle histories of each trial into Fourier series. We can now describe each running trial by roughly 1200 Fourier coefficients, anatomical parameters and metadata; in other words, we have linked body dimensions, gender and the like with running styles.
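To make the idea concrete, here is a minimal sketch of how such a descriptor could be computed, assuming one gait cycle of a joint angle resampled to a fixed number of points; the function names and the choice of eight harmonics are my own illustration, not the actual pipeline:

```python
# A minimal sketch, not the actual pipeline: compress one gait cycle of
# a joint angle (resampled to N points) into a few Fourier harmonics.
import numpy as np

def fourier_descriptor(angle_cycle, n_harmonics=8):
    """Return the mean angle plus real/imaginary parts of the first harmonics."""
    spectrum = np.fft.rfft(angle_cycle) / len(angle_cycle)
    harmonics = spectrum[1:n_harmonics + 1]        # low-order harmonics only
    return np.concatenate(([spectrum[0].real], harmonics.real, harmonics.imag))

def reconstruct(coeffs, n_samples, n_harmonics=8):
    """Evaluate the truncated Fourier series over one gait cycle."""
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    angle = np.full(n_samples, coeffs[0])
    re, im = coeffs[1:n_harmonics + 1], coeffs[n_harmonics + 1:]
    for k in range(1, n_harmonics + 1):
        angle += 2.0 * (re[k - 1] * np.cos(2 * np.pi * k * t)
                        - im[k - 1] * np.sin(2 * np.pi * k * t))
    return angle
```

With a few dozen joint angle histories per trial and a handful of harmonics each, the coefficient count quickly reaches the order of a thousand.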
The little data processing is in the green box to the right. We collect little data specific to a particular subject, such as external anatomical dimensions, gender, running speed, step length and step frequency. Perhaps we also have data available from wearable devices.
Given the little data, an optimization algorithm tunes the parameters of the running model to fit them as well as possible. This generates a running pattern corresponding to the little data, relying on the big data for any information we lack about the subject in question. I will not go into the mathematics of that process here, but it is very efficient and runs interactively on the user's device. Once the model for the user is complete, it can be submitted to biomechanical analysis, again with AMS, to compute the biomechanical parameters we were looking for.
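As a hedged sketch of the general idea (not the app's actual solver), one could fit latent coordinates of a statistical model of the big data so that the predicted little data match the measurements, while a prior term pulls everything unmeasured toward the database mean. The function `predict_little_data`, the latent parameterization and the weighting are all assumptions for illustration:

```python
# A hedged sketch of the fitting step, not the app's actual solver.
# predict_little_data(z) is an assumed function mapping latent big-data
# coordinates z (z = 0 at the database mean) to predicted little data.
import numpy as np
from scipy.optimize import least_squares

def fit_subject(measured, predict_little_data, n_modes=10, prior_weight=0.1):
    """measured: dict of known little data, e.g. {'stature': 1.91, 'mass': 88.0}."""
    keys = sorted(measured)
    target = np.array([measured[k] for k in keys])

    def residuals(z):
        predicted = predict_little_data(z)
        data_term = np.array([predicted[k] for k in keys]) - target
        prior_term = prior_weight * z   # pulls unmeasured traits toward the mean
        return np.concatenate([data_term, prior_term])

    return least_squares(residuals, x0=np.zeros(n_modes)).x
```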
Let us see how this could work. The prototype app was developed in Python with the cross-platform toolkit Kivy, which should ensure that it runs on almost any conceivable platform, including mobile devices. I run it on my laptop, though, and have not tested it on other platforms yet.
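The actual app is of course more elaborate, but a minimal Kivy skeleton with the two buttons described below might look like this (the callbacks are placeholders):

```python
# Not the actual app: a minimal Kivy skeleton with the two buttons
# described below, just to show the cross-platform setup.
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button

class RunnerApp(App):
    def build(self):
        root = BoxLayout(orientation='vertical')
        root.add_widget(Button(text='My Body', on_press=lambda *_: print('edit body')))
        root.add_widget(Button(text='My Run', on_press=lambda *_: print('edit run')))
        return root

if __name__ == '__main__':
    RunnerApp().run()
```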
When I open the app for the first time (or reset the model), it presents me with the average runner you can see below. This imaginary person is 45% man and 55% woman, stands 1.69 m tall, weighs 69 kg and runs at 12.4 km/h. Everything here, body dimensions and movement patterns alike, is generated artificially.
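How might such a blended default arise? One plausible mechanism, purely my assumption, is a linear interpolation between male and female template parameters; with illustrative template values, the announced numbers roughly fall out:

```python
# Purely an assumption about the mechanism: blend male and female
# template parameters linearly. The template values are illustrative,
# chosen so the announced averages roughly fall out.
male = {'stature': 1.75, 'mass': 78.0}
female = {'stature': 1.64, 'mass': 62.0}

def blend(w_male):
    return {k: w_male * male[k] + (1.0 - w_male) * female[k] for k in male}

print(blend(0.45))  # roughly {'stature': 1.69, 'mass': 69.2} -- the average runner
```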
I click the button “My Body” and start inputting my own data. I am a big guy, so I pull the slider to the male side and insert my body mass of 88 kg and stature of 1.91 m.
Although the “little data” are very little at the moment, i.e. just two numbers, all the data necessary to generate the model are available, because everything I did not specify is predicted from the big data. The anthropometric predictions are very simple to check. I can measure the span between my fingertips. The algorithm has predicted it to be 1.97 m, but I can in fact only span 1.93 m, which I type in. Likewise, I measure and register my overhead reach as 2.38 m, and the height to the widest point of my upper thigh, i.e. the greater trochanter, as 0.99 m. With these partial measurements, I end up with the following table, which could be further improved if I took the trouble to measure all the parameters.
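One standard statistical device that could drive such predictions (assumed here, not confirmed to be what the app does) is to treat the database anthropometry as jointly Gaussian and take the conditional mean of the unmeasured dimensions given the measured ones:

```python
# An assumed statistical device, not confirmed by the app: model the
# database anthropometry as jointly Gaussian and predict unmeasured
# dimensions as the conditional mean given the measured ones.
import numpy as np

def conditional_mean(mean, cov, known_idx, known_values):
    """mean, cov: database mean vector and covariance matrix (arrays);
    known_idx: indices of measured dimensions; known_values: the inputs."""
    unknown_idx = np.setdiff1d(np.arange(len(mean)), known_idx)
    S_uk = cov[np.ix_(unknown_idx, known_idx)]      # cross-covariance
    S_kk = cov[np.ix_(known_idx, known_idx)]        # covariance of knowns
    delta = np.asarray(known_values) - mean[known_idx]
    return mean[unknown_idx] + S_uk @ np.linalg.solve(S_kk, delta)
```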
I return from this frame to the main screen, click the “My Run” button, and I get this:
I could type in my running speed and other key parameters I happen to know about my running. But I could also import data that I have collected with a running watch. Such a device has a GPS, which knows my running speed, and it also contains an IMU that registers my step frequency and the accelerations of my wrist. I therefore click “Import gadget data” to browse the files with running watch data that I have stored. I load a file containing data for a 20 km/h running pattern and get this:
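How could the step frequency be pulled out of such a file? Assuming raw wrist accelerations are available (the data layout here is my assumption), one plausible approach is to take the dominant peak of the acceleration spectrum within a plausible cadence band:

```python
# An illustrative estimate, with assumed data layout: the step frequency
# is taken as the dominant spectral peak of one wrist acceleration
# channel, searched within a plausible cadence band.
import numpy as np

def step_frequency(acc, sample_rate):
    """acc: one acceleration channel (m/s^2); returns steps per second."""
    acc = acc - acc.mean()                        # remove gravity/offset
    spectrum = np.abs(np.fft.rfft(acc))
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / sample_rate)
    band = (freqs > 1.0) & (freqs < 5.0)          # 60-300 steps per minute
    return freqs[band][np.argmax(spectrum[band])]
```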
I am now ready to “Recompute running style”. It takes a couple of minutes for the musculoskeletal analysis to process the model. The result is the following rendition of my skeleton:
The app also informs me that my running economy is 829 J/(kg·km) at this running speed. This may not seem very revolutionary, given that the running watch can measure my pulse directly, and pulse correlates closely with metabolism; in other words, I have not yet computed anything that we could not measure easily. However, the model also computes the forces in all muscles and joints, which is something a running watch will never provide. I will return to this subject in a forthcoming post.
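As a back-of-envelope check of what that number means (assuming the definition is energy per kilogram of body mass per kilometre, which is my reading), it can be converted to an average power:

```python
# Back-of-envelope only, and the definition is my assumption:
# energy per kg of body mass per km, converted to an average power.
energy = 829.0    # J/(kg·km), reported by the app
mass = 88.0       # kg, my body mass
speed = 20.0      # km/h, the imported running speed
power = energy * mass * speed / 3600.0   # (J/kg/km * kg * km/h) / (s/h) = W
print(f"{power:.0f} W")  # about 405 W at 20 km/h
```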
But the more interesting feature is that I can investigate things that have not happened yet. For instance, what would be the consequence if I reduced my running speed to 11 km/h (which is much more realistic in my current form)? Well, I can just go ahead and change the speed in the model, and I get this:
This also improves my running economy to 656 J/(kg·km). It is not surprising that a comfortable jog is less strenuous than the fast 20 km/h dash.
The “what if” scenarios will later be placed under the “My Goals” button, where we plan to provide tools that can help the individual runner change running style to reduce loads or improve running economy.
Stay tuned for updates on this in a future blog post…