Statistical Running

I have recently taken an interest in statistical models of various sorts, where the statistics can be in terms of model anthropometry or in terms of the tasks performed by the model. Two very gifted students of mine, Brian Vendelbo Kloster and Kristoffer Iversen, developed a statistical model of running. Before I write about that model, I would like to give you some background for the whole idea.

CAE

I perceive musculoskeletal modelling as a Computer-Aided Engineering (CAE) tool. More prominent representatives of CAE technology are finite element analysis for solid mechanics or heat transmission, or finite volume models of flow problems. These technologies are used to create virtual models of products before the products exist, and they have enabled much of the technological progress we have seen in recent decades. Mobile phones, modern cars and planes, and large wind turbines are just a few examples of products that would not exist in the absence of CAE.

So, what’s wrong with musculoskeletal modelling? Only the fact that most of our models require experimental input, usually in the form of motion capture data. If I want to model a complex motion like a baseball pitch or ingress into a car, then I need to feed measured motion data into my model to make it move realistically.

We have good interfaces for importing mocap data, so what’s the problem?

Well, if the model needs experimental input, then it can really only model something that happened in the past, i.e. the experiment. And the whole point of CAE tools is to create models of virtual products or situations, i.e. things that have not necessarily happened yet. It is fair to say that models are only real CAE models if they can predict the future.

ADL models

This is where a new class of musculoskeletal models comes into the picture. These models are called ADL Models, where ADL means Activity of Daily Living and can be any recognizable movement or working task that humans perform. The idea is that, if the model already knows in general how to perform the task it was developed for, then it needs only a little more input to do the task in a certain way, and this input could even be dependent on other circumstances or statistically varying within a range.

It is really much easier to explain if we use an example. Let us look at the running model that Kristoffer and Brian developed.

Running model

Watch any crowd of running people and you will quickly realize that different people run in very different ways. The running style also depends much on whether we are sprinting or running a marathon. Despite these differences, running is a clearly distinguishable motion and we can recognize it easily when we see it. So there might be more similarity than difference between the various styles. The two aforementioned students and I decided to create an ADL model of running.

Running is also a complex motion, so it is not going to happen as a model unless we have some mocap data to begin with. Brian and Kristoffer collected 143 C3D files from various sources, and 90 of them eventually turned out to produce reasonable running models. The remaining trials suffered from problems such as too much marker dropout or too little of the running cycle being recorded.

We then processed all of the models through AnyBody, resulting in the following:

  • Anthropometric data for each subject, i.e. lengths of the segments. This comes automatically from the system’s parameter identification when it processes the marker data.
  • Anatomical joint angle variations for the running cycle for each subject.

Now we could recreate each subject’s anthropometry and running pattern in the system and proceed to do some statistics on the motion patterns. We initially thought that it would not be too hard, but it turned out that there were more steps to the process than we had imagined.

Data reduction and fixing

In the true spirit of ADL models, we aimed to make a parametric model of running, so that we can recreate all sorts of running in a single model. Just one single running model is driven by thousands and thousands of numbers, so we need a vast amount of data reduction. We are going to do this by principal component analysis (PCA) but, in such cases, it is always wise to reduce complexity by smart decisions from the beginning.

The first such decision was to reduce the complexity of each joint angle variation by approximating it with an analytical function using a small number of parameters. Fourier expansions are the obvious choice because running is a cyclic motion. It turned out that all joint angle movements could be approximated precisely by just a small number of terms in the series, at most five, and in most cases far fewer. So each joint angle movement was now represented by just a handful of numbers.
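
To make the idea concrete, here is a minimal sketch of such a fit in Python. This is not the students' actual pipeline; the five-harmonic least-squares fit and the synthetic knee-flexion curve are my own assumptions for illustration:

```python
import numpy as np

def fit_fourier(angle, n_harmonics=5):
    """Least-squares fit of a truncated Fourier series to one joint angle
    sampled at evenly spaced instants over a full running cycle."""
    t = np.linspace(0.0, 1.0, len(angle), endpoint=False)  # normalized cycle time
    columns = [np.ones_like(t)]                             # constant term a0
    for k in range(1, n_harmonics + 1):
        columns.append(np.cos(2 * np.pi * k * t))           # cosine term of harmonic k
        columns.append(np.sin(2 * np.pi * k * t))           # sine term of harmonic k
    A = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(A, angle, rcond=None)
    return coeffs                                           # [a0, a1, b1, ..., aN, bN]

def eval_fourier(coeffs, t):
    """Evaluate the fitted series at normalized cycle times t in [0, 1)."""
    n_harmonics = (len(coeffs) - 1) // 2
    angle = np.full_like(t, coeffs[0], dtype=float)
    for k in range(1, n_harmonics + 1):
        angle += coeffs[2 * k - 1] * np.cos(2 * np.pi * k * t)
        angle += coeffs[2 * k] * np.sin(2 * np.pi * k * t)
    return angle

# Example: a noisy, knee-flexion-like curve reduced to 11 numbers (1 + 2*5).
t = np.linspace(0.0, 1.0, 200, endpoint=False)
knee = 40 + 30 * np.sin(2 * np.pi * t) + 8 * np.sin(4 * np.pi * t) + np.random.normal(0, 1, t.size)
coeffs = fit_fourier(knee, n_harmonics=5)
smooth_knee = eval_fourier(coeffs, t)
```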

The transfer of the data to Fourier series brings some additional opportunities with it. Several of the mocap trials contained less than a full running cycle, and the trials came from different labs with the subjects running in different coordinate systems. Also, some were running on treadmills, and some were running overground. With the Fourier series, it is relatively easy to transform all subjects into the same coordinate system, make the motions symmetrical between the right and left sides, and convert all of them to treadmill running. Of course, this means that the data set does not allow for investigation of asymmetry in running. Finally, we made sure that all movement functions for each trial had the same basic frequency. We refer to this whole process as data fixing.

If you are choking on your coffee now because you think we are messing too much with empirical data, then please remember that the point of this exercise is not to reproduce how any particular person is running, but rather to obtain a data set that spans the parameter space of running.

Ensuring foot contact

We now have parametric models of different people with different sizes running in different ways. For the further use of the model it is important to make sure that each model obtains proper ground contact with the feet. This might not automatically be the case in a parametric model, because the model is driven from the pelvis and outwards. If we, for instance, make the model shorter, then the feet might not reach the ground.

[Figure: a small and a large model variant]

So we recorded the foot motions for each subject, performed another curve fit, and parameterized these curves such that the feet would always touch the ground in the stance phase.

PCA

We now have a big table in which each row represents a running trial, and each column is a Fourier coefficient for the trial. The running style might actually depend on anthropometry; it is not unreasonable to suspect that people with longer legs tend to take longer steps. So we added the segment dimensions of each subject as additional columns.

Despite all the reductions, we were left with 197 parameters describing the running trials. We could go ahead and start playing around with each of those parameters to see how they influence the model. However, this would not be statistically sound for a couple of related reasons:

  1. There is no way that 90 trials can adequately span a space of 197 parameters. We would need many more trials if we wanted the trial space to support 197 uncorrelated parameters.
  2. The parameters are statistically correlated with each other. For instance, the running speed and step length are known to correlate with the elevation of the foot in the forward swing. So random variations of parameters are likely to create absurd motions that do not exist in reality.

Principal Component Analysis is the go-to method to figure out how many independent parameters we need to describe the running motion. So we ran the table of trial parameters through PCA and found that the first three components accounted for 50% of the variance in the data set, and 90% of the variance could be explained by just 12 components. This is illustrated in the figure below.

[Figure: variance explained by each principal component]

Let me briefly explain the nature of PCA for those not familiar with the technique: Each principal component is a vector in the space of the original parameters; it designates a principal direction in the data set. The principal components are uncorrelated, so we can vary each one independently, i.e. travel along its direction in the parameter space. PCA also tells us how far it is reasonable to vary each component, because it gives us the standard deviation of the data along each component direction.
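
As a rough sketch of the procedure, the Python snippet below runs on random stand-in data (the real trial table is not reproduced here, so the numbers are meaningless; only the steps are illustrative): centre the table, take the SVD, read off the variance explained, and synthesize a trial two standard deviations along the first component, as used in the figures below.

```python
import numpy as np

# Stand-in for the real data: 90 trials x 197 parameters (Fourier coefficients
# plus segment dimensions). Random numbers here; the real table is not public.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 197))

# Centre the table and find the principal directions via SVD.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

# Fraction of the total variance explained by each component.
explained = s**2 / np.sum(s**2)
print("First 3 components explain:", explained[:3].sum())
print("First 12 components explain:", explained[:12].sum())

# Standard deviation of the trials along each principal direction.
sigma = s / np.sqrt(X.shape[0] - 1)

# The 'average runner' is the mean parameter vector; a new, statistically
# plausible trial is obtained by displacing the mean along a component.
average_trial = mean
jogging_trial = mean + 2.0 * sigma[0] * Vt[0]    # +2 SD along the first component
sprinting_trial = mean - 2.0 * sigma[0] * Vt[0]  # -2 SD along the first component
```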

Obviously, the first one is the most interesting in the sense that it accounts for almost 30% of the variation. Let us begin the exploration by taking a look at the average running pattern. This pattern is found exactly in the centre of the parameter space of the running trials. I think you will agree that the analysis has reproduced what appears to be a mainstream running motion.

[Figure: the average running pattern]

We now take the first principal component and displace it by two standard deviations in the positive direction. This seems to produce a running pattern that is much less intense than the average. This guy or girl is really just jogging.

[Figure: slow running, +2 standard deviations along the first principal component]

As expected, changing the first principal component by two standard deviations in the opposite direction creates a fast, intense running motion.

[Figure: fast running, −2 standard deviations along the first principal component]

We can compare the slow and the fast running by overlaying a couple of keyframes. First we look at the stride:

[Figure: stride comparison between slow and fast running]

…and then at the elevation of the heel in the forward swing:

[Figure: heel elevation in the forward swing]

We can see that, as expected, the strides are longer and the heel elevation is higher for fast running. There are also some surprising elements that may indicate that we have to adjust the data processing a bit. It looks like none of the models fully extends the knee. This could be because the assumed marker locations on the models need adjustment, and it will have to be investigated further.

Outlook

We still have a lot of data mining left to do to figure out the physiological significance of the principal components. There is also much work remaining on automation of the data processing. Ideally, we want to create a C3D database of running trials to which we can simply add new trials and have the whole processing repeated automatically. Right now, the curve fitting and coordinate system transformations still need some manual intervention.

The applications of models like these are almost endless. With the parameter space we could:

  • Identify plausible full running patterns for individuals about whom we only know a few things like their size and stride length.
  • Add kinetics in the form of ground reaction force prediction, which we know that we can do reliably in AnyBody.
  • Compute muscle and joint loads as a function of virtual running styles.
  • Offer modellers the opportunity to investigate running without experimental input and ask the model what-if questions.

Up to the challenge

Last week, everybody who is somebody in biomechanics gathered for the 7th World Congress of Biomechanics in the great city of Boston, Massachusetts. Several national and continental organizations serve the biomechanical community, and they all have their own congresses, but every four years all of them get together in one big and all-encompassing conference. This time, the World Congress hosted more than 5000 delegates.

For me personally, and for several of my closest colleagues, this was a very positive event. There is a much-increased interest in simulation methods in general and in the AnyBody Modeling System in particular. It has been a long time coming, and I have felt at times that we had to explain over and over why it is reasonable to believe that the laws of mechanics apply to the human body and that we would, despite challenges, get it right if we stuck to that belief and kept refining the models and software.

This year, many applications of AnyBody were presented by independent research groups, some of them as unrelated initiatives and some under a competition named “The Grand Challenge”. A visionary group of scientists headed by B.J. Fregly, Darryl D’Lima and Thor Besier has now published five data sets containing in-vivo measurements of knee forces from an instrumented implant and challenged the simulation community to predict these forces. Each edition of the competition features a new activity, and the contestants initially do not know what the correct forces are, so it is a truly blinded test. After the estimates have been submitted, the true values are revealed, and the participants are then challenged to improve their models and predictions.

For this year’s competition, my colleague Michael Skipper Andersen headed a strong group of scientists related to the TLEMSafe project, more precisely Marco Marra, Valentine Vanheule, René Fluit, Nico Verdonschot and yours truly. Not only did we win; our prediction was the closest in the history of the competition. Second place went to a Korean group also using AnyBody but with a completely different model. This should finally silence any doubt that musculoskeletal simulation can indeed predict forces inside the body.

[Figure: Grand Challenge results]

I often compare the evolution of musculoskeletal modeling with the development and adoption of finite element analysis. When I was a student in the 1980s, I was extremely fascinated by the possibilities of finite element analysis for the solution of engineering problems, and I did my PhD in this field, developing my own shape optimization system and its associated finite element solver bottom-up in Pascal. It was quite challenging at times, partly due to the lack of programming tools and debuggers but also because of a lack of understanding. Many older professors completely misunderstood the project, even when the results began to appear, and I remember in particular one instance when a slightly cranky one cornered me and started shouting agitatedly into my face that “this finite element shit will never amount to anything – ever!”

Time proved him wrong, and very few of the high-tech products we use today, from mobile phones to wind turbines, could have been developed in the absence of finite element simulation in the design process. I have felt since the beginning of the AnyBody project that musculoskeletal modeling is on the same path and holds the same potential.

To accomplish that goal, we must also look to the way CAE in general is used: We rarely make finite element models of bridges that are already built or of last year’s car model. We make models of future bridges or the bodies of cars to be marketed in three or five years; we simulate the future. The real challenge of musculoskeletal modeling is to make the technology reliable enough to be used for prospective analysis. So far, most musculoskeletal models have simulated retrospective situations for which experimental input data, for instance from motion capture and force platforms, is available, i.e. the past. This might be interesting for research, but it is not what makes the technology valuable to a large group of users in healthcare and in industry. The real potential is simulation of situations that may happen in the future: the outcome of a possible surgical procedure, the behavior of a new type of joint prosthesis, the ergonomic quality of a new hand tool or working environment being designed.
To make this happen, we must meet at least three additional challenges:

  1. Models must be able to represent individuals for healthcare applications as well as statistical cross sections of the population for product design.
  2. Models must be independent of force input, typically from force platforms.
  3. Models must be independent of motion input.

These three challenges will define much of the research of my group in the forthcoming years. Let me try to give a brief status on this:

Statistical shape modeling was a big topic at WCB2014 and in the biomechanical community in general, and this will eventually benefit AnyBody models. Together with good partners, we have also made much progress within the TLEMSafe and A-FOOTPRINT EU projects in terms of individualization of the models. We can do it, but it takes time, and the workflow must be improved.

AnyBody relies on inverse dynamics, which has a legacy from classical motion analysis. It is therefore a popular misconception that force input is absolutely needed. This has never been the case, due to the very general mechanical formulations used inside AnyBody, but we have used force platform input when it was available because it is better to use real data when we have them. At WCB2014, Michael Skipper Andersen with co-authors Fluit, Kolk, Verdonschot, Koopman and Rasmussen presented the paper “Prediction of Ground Reaction Forces in Inverse Dynamic Simulations”, which very convincingly shows that ground reaction forces can be predicted with great accuracy from kinematics alone and without increased computation time.

The final, and most severe, challenge is to predict motions. Saeed Davoudabadi Farahani from the AnyBody Research Group recently had a paper accepted that convincingly predicts cycling motions, and another paper on squat jump motion prediction is under review. Stay tuned for those. The status in this field is that we can predict simple motions reliably, but the computation times are still too high, and there are open questions regarding prediction of abstract working tasks.

We will not run out of scientific challenges any time soon, but we are up to them and WCB2014 shows that we have come a very long way.

I wish everybody in the Northern Hemisphere a great summer vacation.

20/20 vision

They say that hindsight is 20/20 vision; we are always so much smarter when we look back than we were when we tried to navigate forward in a difficult world. My famous compatriot, Søren Kierkegaard, said it better than anybody else: “Life can only be understood backwards; but it must be lived forwards.”

The end of the year is usually the time to look back and evaluate a little. A few years ago, I helped start a research project called AnyBody Inside in cooperation between AnyBody Technology and the research group that I am heading at Aalborg University. The purpose of the project, funded by the Advanced Technology Foundation, was to prepare the AnyBody Modeling System to be used “inside” in several ways: inside other software and for applications inside the human body, such as surgical planning, design of joint replacements and dimensioning of trauma devices. Along the way we were also fortunate enough to get invited into several EU research projects by outstanding project coordinators: SpineFX, A-FOOTPRINT and TLEMSafe are all projects that contributed enormously to the development of AnyBody.

Before we got to this point, we had thoroughly discussed the direction of our research and development. The analysis showed that, of all the directions we considered, orthopedics was by far the most difficult technically and, yet, that was the way we went. In hindsight it may not look like the smartest choice to head directly towards the highest mountain we could see, but that choice says much about the resilience and determination of the people I am working with. The fact that they have now climbed the mountain says much about their unusual skills. Over the past decade, they have created a world-leading technology for musculoskeletal simulation with amazing ingenuity, solving problems nobody had previously been able to tackle. I apologize for a bit of self-indulgence at the end of the year, but I think this is a good time to review some of those achievements: some that are already out, and some that will be published and marketed in the New Year:

We have been integrating AnyBody into workflows that are used by engineers and scientists. AnyBody is no longer a stand-alone technology. Functional environments, i.e. assemblies, can be imported from SolidWorks into AnyBody and hooked up with the human musculoskeletal model. The video below shows one of my favorite models conceived in the wonderfully curly mind of my genius colleague, Moonki Jung. I call it the Steampunk model, but it really just demonstrates how a mechanism developed in SolidWorks can be imported with kinematic constraints, CAD part geometries and even surface colors from CAD to AnyBody.

In a downstream data direction, we have also integrated AnyBody with finite element software. The idea is that most of the daily loads on the skeleton come from muscle forces, so they are essential to include in any realistic FE model of bones and joints. We even published an investigation of that importance (Wong et al. 2011). Any FE code can be fed forces from AnyBody with a small effort of user interface scripting, but if you are lucky enough to use Abaqus or ANSYS, then you can benefit from more automated interfaces. The image below is from a recent paper that analyzes the stresses in a clavicle fracture fixation device (Cronskär, Rasmussen & Tinnsten 2013).
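
The exact mechanism depends on the FE code, but in its simplest form the coupling amounts to writing the muscle forces computed at a given time step to a file that the FE preprocessor can turn into point loads. The sketch below is purely hypothetical; the muscle names, node numbers and file layout are invented for illustration and do not correspond to any actual interface:

```python
import csv

# Hypothetical muscle force results for one time step of a simulation:
# attachment node id in the FE mesh and the force vector in newtons.
muscle_forces = [
    ("deltoid_insertion", 1023, (12.4, -85.1, 3.0)),
    ("pectoralis_insertion", 2047, (-4.2, -40.7, 9.8)),
]

# Write the forces as point loads that an FE preprocessor script can read.
with open("muscle_loads.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["muscle", "node_id", "Fx_N", "Fy_N", "Fz_N"])
    for name, node, (fx, fy, fz) in muscle_forces:
        writer.writerow([name, node, fx, fy, fz])
```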

[Figure: stress analysis of a clavicle fracture fixation device]

Without going too much into technical details, I can say that the type of analysis performed by AnyBody usually requires an assumption of idealized joints, for instance a hinge for the knee. That may be fine for ergonomics, but any anatomist will agree that the knee is far from a hinge if you look a little closer. It is much more complex, and some of its movement is due to elastic deformation of soft tissue such as ligaments and cartilage. In AnyBody we have a few people exceptionally skilled in mechanics and mathematics, Michael Skipper Andersen, Michael Damsgaard and Daniel Nolte. Together they developed the necessary surface contact algorithms and an entirely new biomechanical analysis method: force-dependent kinematics. This makes it possible, for the first time ever, to analyze very complex musculoskeletal systems with hundreds of muscles and detailed modeling of complex joints such as knees, shoulders (the GH joint) and temporomandibular joints. This is already available in the AnyBody Modeling System, and a publication about the technique is being prepared. The video below shows the deformation in a knee simulated with FDK. Notice the shift in the joint in response to the impact force at heel strike just at the beginning of the video. 2014 will bring lots of really interesting applications of this new technology.

The whole purpose of a simulation technology is to predict how the world works, in particular what will happen if we intervene in some sense: perform surgery to correct pathological gait, implant a joint replacement, change a workplace or alter the design of a bicycle. Some of these interventions will change the movement pattern, and movement is input to the type of analysis we perform in AnyBody. So how can we compute something that we need as input for the same computation? It seems like a catch-22, and this is where posture and motion prediction comes into the picture. Saeed Davoudabadi Farahani is working in this field, which is still in its early days. We are trying to prove that optimality principles can predict the way we move, and the results are looking good. Below is a picture of a predicted squat jump. Stay tuned for more on this in 2014.
[Figure: predicted squat jump]

The final issue I want to mention is verification and validation, V&V. Any technology with aspirations to be used for serious tasks should go through these processes, and Morten Lund in particular has been looking into this and has published a rather comprehensive review. One of the things we have found is that V&V is an ongoing process. You cannot just V&V a complex software system and then tick it off as done. So we have been developing what we call a validation engine. This tool runs new software versions and model library models through a comprehensive sequence of tests and compares the results with previous results as well as published experimental data. I fully expect that this will set a new standard for V&V in musculoskeletal systems in 2014.
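
The post does not describe how the validation engine is implemented, but the core of such a regression check is simple. Here is a minimal sketch; the quantities, numbers and tolerance are invented for illustration:

```python
def compare_with_baseline(new_results, baseline, rel_tol=0.02):
    """Return the quantities that drifted more than rel_tol from the baseline."""
    failures = []
    for quantity, reference in baseline.items():
        value = new_results[quantity]
        if abs(value - reference) > rel_tol * abs(reference):
            failures.append((quantity, reference, value))
    return failures

# Example: one repository model run with a new software version, compared
# against values stored from the previous release (numbers invented).
baseline = {"peak_hip_force_BW": 4.28, "max_knee_flexion_deg": 63.5}
new_results = {"peak_hip_force_BW": 4.31, "max_knee_flexion_deg": 66.2}
print(compare_with_baseline(new_results, baseline))
```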

Oh, and 20-20 vision is also what I expect from myself after a small piece of clinical biomechanics performed on my eyes on January 15th: I shall have the lenses in my eyes replaced. They have become rigid as it happens to us all around the age of 45 or 50, so I am unable to focus on more than one distance and must use glasses. An advanced laser will pulverize the stiffened contents of my current lenses, and a surgeon will remove the debris and insert a new, multifocal lens that hopefully will restore my vision to its former strength. This procedure is performed more than 15,000 times per year in Denmark alone and is a wonderful example of how science is improving the lives of ordinary people. Let’s use our biomechanical skills to do the same with osteoarthritis and other musculoskeletal diseases in 2014.

References

Cronskär, M., Rasmussen, J. & Tinnsten, M. 2013, “Combined finite element and multibody musculoskeletal investigation of a fractured clavicle with reconstruction plate”, Taylor & Francis.
Wong, C., Rasmussen, J., Simonsen, E., Hansen, L., de Zee, M. & Dendorfer, S. 2011, “The Influence of Muscle Forces on the Stress Distribution in the Lumbar Spine”, The Open Spine Journal, vol. 3, pp. 21-26.

Natal – coming together

Biomechanics, like every scientific field, has its main conferences. The International Society of Biomechanics hosts a biennial congress that attracts around 1000 researchers from literally all over the world, and these conferences are among my absolute favorite events.

This July the event was in Natal, Brazil. It had to be Brazil, of course, with its emerging economy, enormous natural resources, great advances in technology and science, and wonderful weather. The country has become something of a megatrend lately. Biomechanics has its megatrends too, and there is no better place to catch up with them than the ISB conferences. I will mention three of them here:


Me doing my stuff at the satellite symposium on computer simulation.

Muscle synergies – neuroscience meets biomechanics

It is no secret that the central nervous system is very advanced. Christof Koch of the Allen Institute for Brain Science has called the brain “the most complex object in the known universe”. Of course, it is debatable how and whether we can delimit objects in the universe, and one could argue (and prove in psychological tests) that several brains together work better than one. But even granting the central nervous system’s amazing abilities, once you understand a little about the complexity of the control problem that must be solved in real time just to make a human walk and stay in calculated balance (or calculated imbalance, actually), it is still rather impressive how our sensory-motor system manages, and much research is devoted to this problem.

From the neuroscience perspective, scientists are measuring signals that travel up and down the neural pathways and making detailed models of them. It was nice to see a few of those presented at the conference. From the biomechanics side, we try to understand how muscle forces come together to provide exactly the right amount of force at exactly the right time to keep us on our feet, considering the three-dimensional dynamics we are subject to, including friction, gravity, contact forces, perturbations, acceleration, centrifugal, gyroscopic and Coriolis forces. Setting up the equations is mind-boggling to most of us, let alone solving them in real time. Neuroscientists and biomechanists are addressing the same problem from two separate sides, and they may be just about to meet in the middle.

All that analysis of neural signals has brought about a new idea that is spreading like wildfire at the moment: muscle synergies. Statistical processing of the signals sent to different muscles reveals that they are not independent. It looks like the vast majority of the signals, and we are talking several dozen, can be described by just a few free variables. If the central nervous system can make all this happen through modulation of just a small number of variables, then perhaps it is not so strange that it can manage the complexity.
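
The post does not name the underlying method, but synergy extraction is commonly done by non-negative matrix factorization of the muscle activation (EMG) matrix. The sketch below uses made-up data and scikit-learn's NMF purely to illustrate the idea:

```python
import numpy as np
from sklearn.decomposition import NMF

# Made-up EMG envelopes: 16 muscles sampled at 500 instants of a movement,
# constructed from 3 underlying activation patterns plus a little noise.
rng = np.random.default_rng(1)
patterns = rng.random((3, 500))
weights = rng.random((16, 3))
emg = weights @ patterns + 0.01 * rng.random((16, 500))

# Factor the matrix into a few synergies and the muscles' weightings of them.
model = NMF(n_components=3, init="nndsvd", max_iter=500)
W = model.fit_transform(emg)   # 16 x 3: how much each muscle uses each synergy
H = model.components_          # 3 x 500: the synergy activation patterns

# Variance accounted for by just three synergies.
vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
print(f"Variance accounted for: {vaf:.3f}")
```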

Muscle synergies were the topic of several very interesting presentations and indeed also one of the focuses of my work at the moment, so stay tuned for more info in future blog entries.

Forward, inverse, static, dynamic

Other approaches that seem to be coming together are the different solution methods for musculoskeletal systems. The issue is that Newton’s equations allow us to find the movement if we know the forces, or find the forces if we know the movement. These two approaches have been called forward and inverse dynamics respectively, and the latter is sometimes called static optimization, which in my opinion is misleading. I’ll try not to go off on a tangent about that but merely mention that scientists tend to like their own approaches, often just because it is a hassle to try out different ones, so a lot of arguments have been wasted on debating whether one or the other is the best way to go. For a long time, forward dynamics seemed to be the choice of a lot of scientists, but inverse dynamics started its comeback when two of the leading scientists in the field, Anderson and Pandy, published a paper entitled “Static and dynamic optimization solutions for gait are practically equivalent” [1]. Seemingly, two paths can lead to the same goal.
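
A single-joint toy example makes the distinction clear: if the motion is known, the joint torque follows algebraically from the equation of motion; if the torque is known, the same equation has to be integrated forward in time. The sketch below is my own illustration of a rigid segment swinging about a fixed joint and is unrelated to any particular software:

```python
import numpy as np

# One rigid segment (a 'leg') rotating about a fixed, frictionless joint.
m, L, g = 8.0, 0.9, 9.81        # mass [kg], length [m], gravity [m/s^2]
I = m * L**2 / 3.0              # moment of inertia of a uniform rod about its end

def inverse_dynamics(theta, theta_ddot):
    """Motion known -> torque: one algebraic evaluation per instant."""
    return I * theta_ddot + m * g * (L / 2) * np.sin(theta)

def forward_step(theta, omega, torque, dt=1e-3):
    """Torque known -> motion: must be integrated step by step."""
    alpha = (torque - m * g * (L / 2) * np.sin(theta)) / I
    return theta + omega * dt, omega + alpha * dt

# Inverse: prescribe a posture and an angular acceleration, read off the torque.
tau = inverse_dynamics(theta=0.3, theta_ddot=2.0)

# Forward: apply that torque and march the state through one second.
theta, omega = 0.3, 0.0
for _ in range(1000):
    theta, omega = forward_step(theta, omega, tau)
```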

What happened at this conference is that the number of successful results and new approaches based on inverse dynamics seemed to have increased compared with earlier conferences, and several predictive papers were presented. The society’s president, Ton van den Bogert, presented direct collocation methods capable of predicting motion as well as forces, i.e. both sides of Newton’s equations, and my colleague, Michael Skipper Andersen, showed how we can predict ground reaction forces in inverse dynamics simulations of gait and how the predicted forces actually lead to much better estimates of hip joint reaction forces. I think these developments are going to bring many more clinical applications of musculoskeletal simulation in the near future.

Clinical applications

Speaking of clinical applications, the coolest presentation from my point of view was by René Fluit from the University of Twente. René presented preliminary results from the TLEMSafe project, in which an exceptionally detailed but generic lower extremity model is made specific to each patient in a surgical planning system, building on state-of-the-art technologies like AnyBody and Mimics. The picture below shows René (barely visible in the lower part of the photo) presenting the generic model (left) and the model morphed to represent a patient with severe pelvis and hip deformation (right). I am really excited about the prospects of this technology for treatment of very disabling diseases.

[Photo: René Fluit presenting the generic and patient-specific models]

Another really amazing practical application of musculoskeletal simulation was presented by Henrik Koblauch (picture below) from the University of Copenhagen. Henrik develops impressive models of airport cargo loaders’ work situations. These are the guys who load and unload (and occasionally break) your heavy suitcase. They work in impossible postures and under very tight space constraints. Henrik was able to identify certain working postures that are especially injury-prone.

[Photo: Henrik Koblauch]

References
1. Anderson, F. C. and M. G. Pandy. Static and dynamic optimization solutions for gait are practically equivalent. J. Biomech. 34:153-161, 2001.

Digital Human Modeling

[Figure: an ergonomic manikin connected to a musculoskeletal model]

This week I attended a conference on digital human modeling (DHM) at the University of Michigan. DHM is about all sorts of models of human features and behavior, and biomechanics is an important one of these. All the papers and abstracts are freely available online: http://www.dhm2013.org/.

So what’s new on the DHM side? I’ll try to quickly recap some of the emerging trends.

Microsoft’s Kinect camera seems to really be a game changer. Scientists have been working on markerless mocap technology for years without managing the final breakthrough. But then Microsoft put about a million engineers on the task and probably gave them a gazillion dollars to work with, and suddenly we have a depth camera that costs about $100, potentially replaces a $100,000 motion capture system and provides a number of other interesting features in addition. The conference contained half a dozen presentations about this technology. It is not as accurate as marker-based motion capture, but I am convinced that there are many applications, particularly in ergonomics and human behavior studies, where the accuracy of Kinect and other similar devices will be good enough.

Kinect is a depth camera, meaning that it records distances between the lens and a point cloud on the surface of whatever it is observing. The additional dimension in the data means that the camera is also in essence a 3-D scanner, providing information about complex-shaped objects. The second big topic of the conference was applications and methods for processing these point clouds into geometric models of humans.

Point clouds are just a bunch of three-dimensional numbers, and they are not actually very descriptive before somebody processes them and extracts useful or discernible features of what they represent. So several papers were about the processing of sets of point clouds and the extraction of their descriptive features. Two important mathematical methods are repeatedly used for this. The first is Principal Component Analysis, PCA. It is a method for determining the important dimensions in a multidimensional data set. I will not go into the mathematics, but imagine you have a data set with 50 dimensions, so each feature you are describing would require 50 input numbers. With PCA you can find dimensions in this space that contain the majority of the information, even if they are oblique to the dimensions that originally describe the data. If you use these new, principal directions, you may be able to describe most of the features of the data set with just a few parameters instead of the 50 you started with. This is really interesting when we try, for instance, to make parametric models of facial features, foot shapes or the geometry of an ear.

The other important technology that comes into play with the point cloud data is Radial Basis Functions, RBF. RBF is really an interpolation technology that works with unstructured data like scattered points, and these functions allow us to morph complex shapes of bones or skin surfaces from one person to another.

AnyBody has also embraced these new technologies. For a couple of versions, we have been able to morph models based on sets of bony landmarks from CT or MRI scans. For this we use RBF.
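
As an illustration of the idea (this is not the implementation inside AnyBody), an RBF interpolator can be fitted to map a handful of source landmarks onto the corresponding target landmarks and then applied to every vertex of a template surface. The landmark coordinates below are invented:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Invented bony landmarks on a template bone and on a subject's scan [m].
source_landmarks = np.array([[0.00, 0.00, 0.00],
                             [0.10, 0.00, 0.00],
                             [0.00, 0.20, 0.00],
                             [0.00, 0.00, 0.40],
                             [0.10, 0.20, 0.40]])
target_landmarks = source_landmarks * 1.08 + np.array([0.0, 0.01, 0.0])  # a larger subject

# One smooth spatial mapping from source positions to target positions.
morph = RBFInterpolator(source_landmarks, target_landmarks,
                        kernel="thin_plate_spline")

# Apply the mapping to all vertices of the template surface mesh.
template_vertices = np.random.default_rng(2).random((1000, 3)) * 0.4
morphed_vertices = morph(template_vertices)
```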

The final trend I want to bring out is technology convergence, in the sense that human modeling systems seem to be getting ready to capitalize on each other’s features, so that users end up with a more complete tool. Two papers were about the similarities and differences between the systems and user needs, and yours truly demonstrated how we can use advanced kinematic processing to connect otherwise incompatible models. The picture at the beginning of this blog entry, created by my colleague Moonki Jung, shows how we can use this idea to connect an ergonomic manikin in a CAD system with detailed musculoskeletal analysis. Now we just have to persuade the manikin manufacturers that this is a brilliant idea. The users already think so.

Huge thanks go out to Matt Reed and Matt Parkinson and their entire staff for organizing a perfect symposium.

Borderline Biomechanics

[Image: the damaged racket]

Smashing things remains a favorite activity of boys and scientists alike. In my department, we have a very powerful air gun designed and built by my colleague, Jørgen Kepler, and it is a wonderful gadget also for investigations bordering on biomechanics. We are very interested in sports equipment, because it has a profound influence on the biomechanics of athletes, and if the sport in question is tennis and I even get to smash things, then I am certainly willing to play.

The research has a serious side too. Most tennis players above the introductory level will say that the racket and stringing have a profound effect on the ability to impart speed and spin to the ball, and these are key elements to winning a tennis match. So manufacturers and tennis biomechanists put much effort into understanding how the properties of rackets and strings may influence the stroke. It all takes place in a fraction of a second while the ball and string bed are impacting each other, so the phenomenon cannot be observed live, which brings us to a second piece of really cool equipment: a high-speed video camera capable of up to 200,000 pictures per second. With this camera we can observe very fast phenomena.

So a group of our students, Kepler’s and mine, are setting up an investigation in which we connect experimental data with models of the impact phenomenon. The first experiment is to shoot a ball at a racket and record the impact with the video camera. Kepler came to my office the other day and, with a smirk that failed to disguise his excitement, regretted that he was unable to get the gun to shoot less than 100 m/s (= 360 km/h = 225 mph), and would I have a racket that I would volunteer for the experiment? For those of you unfamiliar with tennis, 100 m/s is well beyond the ability of even the hardest-hitting players.

I do in fact have an old racket that I no longer use, so we went and strapped it up in front of the gun. We angled it at 45 degrees relative to the path of the ball to see how spin is created, and the spectacular result is shown in the video below.

It looks like several strings break, but it is in fact just a single break, which subsequently puts slack into the neighboring strings. We can also see the mains moving far sideways and the crosses being stretched elastically. This is very interesting because it shows that we may get more spin from the impact than popular tennis theory would indicate.

Well, Kepler did not make it far in science by being a quitter, so a new experiment was set up with the pressure cranked up by a factor of 20 to 200 bar, resulting in a ball velocity of 260 m/s (= 936 km/h = 585 mph). This time we shot the ball perpendicularly at the racket, resulting in the damage shown in the picture at the head of this blog entry. Here’s the footage:

The power of the gun completely destroys the racket as well as the ball. In fact, both suffer terminal damage even before the impact. The pressure in the gun shoots a hole in the ball while it is still inside the barrel, and this is the source of the debris you see flying towards the racket before the ball appears. The pressure wave from the gun deforms the racket and destroys the frame even before the ball hits.

That was fun, but I am not giving up more of my rackets for this experiment before Kepler manages to reduce the power of his gun to a more realistic level for tennis. When that happens, I expect that we can get really good data to calibrate our simulation model and start predicting the influence of stringing parameters on the impact phenomenon.

All of this is a part of Aalborg University’s master program in sports technology. Check it out if you would like to learn constructive science by smashing things.

As simple as possible…

It was Albert Einstein who allegedly advocated that models should be as simple as possible, but not simpler than that. I have been a modeler all my adult life and I very much subscribe to this point of view, but I do not always live by it. You see, my research group spends a lot of time developing and perfecting biomechanical models, and they tend to get more and more complex, as you can see in the picture below.

[Image: full-body musculoskeletal model]

This model has a little more than 1000 individually activated muscles and it does not stop there. As I am writing this, my colleagues are working on foot models, hand models, mandible models, thoracic spine models, new knee models and a new shoulder model, all of which will further increase the complexity of what you see above.

Why are we doing that if we want simplicity?

The full body model shown above is more than adequate in complexity for a lot of investigations, for instance many ergonomic studies, and in some cases much simpler models may work fine. The video below shows a 2-D cycle model that was developed in the early days of the AnyBody project. Despite its simplicity I still think it is a pretty good representation of the muscle actions in pedaling.

But if we want to zoom into details of the body, such as a foot or a knee and want to investigate the biomechanical conditions in those, then we need even more detailed models of those parts. I really cannot see model detailing coming to an end any time soon. For many applications, the models we have are still simpler than “as simple as possible”.

In geometric modeling, the dichotomy of simultaneously wanting detailed and simple models has been known for many years. For instance, a single CAD model of a car may contain millions of features (geometric details), but an application targeting the exterior surface of the car body may work better if it does not have to maintain information about the thread in the screw holes in the cylinders of the engine. Modern CAD systems deal with this problem by feature suppression; you can remove entire trees of features temporarily from the model and switch them on again when they are needed. Similarly, finite element models that analyze the mechanical behavior of the car can cope with large elements to simulate the overall vibration modes of the body but need much smaller elements to assess fatigue in the welded connection of the hinge to the door frame. Modern finite element systems can link such models together with substructuring techniques.

So what stops us from maintaining crude models for overall ergonomic investigations and much more detailed models of, for instance, a knee for ACL injury or osteoarthritis investigations? It turns out that it is very difficult to detach body parts from the rest of the body. Finite element models like the car models I mentioned before have the property that local phenomena in the model tend to influence only the local region around them; this is known in engineering as the Principle of Saint-Venant, and it means that we can remove a small hole in the model of the car body and not influence the analysis of its overall vibration modes much. Because of their high nonlinearity, musculoskeletal models do not share that property, and neither do real bodies. For instance, many therapists will tell you that a number of back and neck problems can be cured with shoe insoles. So one end of the model can easily influence the other end, and if we detach the foot from the lower leg or the lower leg from the thigh, neither of the separated parts may work the way we expected. In other words, we need to be very careful about what we are doing when we isolate body parts in models.

One possible solution is to develop popular subsets of the body, such as a pelvis and two legs for gait analysis or an upper body and one arm for pitching movements, and to make sure the body parts are reasonably “tied off” at the ends where they are separated from the rest of the body. When we develop our body models, we try to enable this as much as we can by allowing users to select the presence or absence of extremities in the models. We have implemented a whole lot of additional syntactic features in our modeling language to make this happen seamlessly. For more advanced cases, the best advice is what applies to modelers in all fields: you have to understand what you are doing.

Perhaps in the future, when computers are much faster and more powerful than today, we can simply include the entire detailed body with all its bells and whistles in all our analyses.