Mouse Grimace Scale - Who's using it?

I’ve seen renewed interest in the Mouse Grimace Scale, most recently at SFN. Some groups are trying to automate the Grimace scoring computationally.

Is anyone using this? There could be value, but I’ve heard it’s challenging to reproduce the results of the original paper:

https://www.nature.com/articles/nmeth.1455

@ShanTan: Have you done this?

That sounds like a bit of a challenge. :wink: For the record, there are now over 50 published papers showing facial grimacing in animals caused by pain, and grimace scales have now been developed for rat, cow, horse, cat, sheep, lamb, rabbit, pig, monkey, and ferret. In fact, what may possibly be wrong about the original paper is the idea that grimacing only occurs for 24–48 hours. A number of groups have now shown facial grimacing in a chronic pain context, and we ourselves are starting to see that, with longer habituation, mice with SNI do indeed grimace.

This being said, I also would be happy to hear about people’s experiences with the MGS or RGS. Software to automate the laborious process of turning video into still photographs is available for free, and as stated above, machine learning algorithms for automated scoring have been developed by multiple groups.
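If you would rather script the frame-grabbing step yourself, something as simple as the sketch below will do the job. To be clear, this is just a generic OpenCV example, not the software referred to above, and the file names and 3-second sampling interval are placeholders:

```python
# Minimal sketch: save a still frame from a behavior video every few seconds.
# Assumes OpenCV (pip install opencv-python); paths and interval are placeholders.
import os
import cv2

def extract_frames(video_path, out_dir, every_n_seconds=3.0):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0          # fall back if FPS is missing
    step = max(1, int(round(fps * every_n_seconds)))  # frames between saved stills
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g. extract_frames("mouse01.mp4", "stills/mouse01", every_n_seconds=3.0)
```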

Hi @jmogil
Thanks so much for your response!

I’m so glad that the MGS is being widely used. To clarify: I have no doubt about the validity of the original results; I consider your lab’s behavior analyses the gold standard. What I meant is that the MGS looks hard to do for the untrained eye (e.g., me), and I wanted to know how difficult it would be for a novice to implement. It appears it’s not as unapproachable as I thought, which is great. The MGS looks very powerful, and I want to apply it to my behavior assays, where a paw withdrawal may not reveal the animal’s true state. In particular, I want to couple it with optogenetic stimulation of the paw, since in my case withdrawal occurs but likely isn’t “pain”.

I’ll let you know how it goes. But I’m encouraged to try it now.

Right. I guess “challenging” means two completely separate things! It’s actually pretty easy to learn to see the action units. The hard part, frankly, is getting good-enough optics to see the whisker changes, but some groups simply drop that and use four action units instead of five, and things still work out well.

@jmogil Sorry for the confusion. Thanks again for contributing to this forum!

Do you have a recommendation for a camera or setup that would enable one to see the whisker changes? We talked a little bit about that here:

Also, I think more labs are appreciating the subtlety of the mouse behaviors we look at, and just looking at binary “yes/no” withdrawals will fail to capture the entirety of the mouse’s sensory experience. Hence, my interest in MGS and other additional assays.

I’m looking forward to the implementation of approaches like this to discover behavioral elements we may not even be looking for right now.

Not really a recommendation, but I can get you in touch with Susana to tell you what we use if you want. -Jeff

Hi @achamess,

I employed the RGS in my research on headache over the past year, and I must say it has worked really well so far. I agree with what @jmogil said above about needing clear enough optics to see the whisker changes, so I tend not to include that action unit in my final analysis. It’s pretty easy to see everything else, but it does take practice. I definitely recommend having you and others in your lab score the same images and compare your findings, and also discussing any images that cause contention.
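To make that comparison concrete, a quick script along the lines below can flag the images two raters should talk through. This is only an illustrative sketch with made-up numbers, not our actual analysis:

```python
# Illustrative only: compare two raters' mean grimace scores on a shared
# image set and flag large disagreements. Scores here are invented.
import numpy as np
from scipy.stats import pearsonr

rater_a = np.array([0.75, 1.50, 0.25, 2.00, 1.00, 0.50])  # mean score per image
rater_b = np.array([1.00, 1.25, 1.25, 1.75, 1.00, 0.50])

r, p = pearsonr(rater_a, rater_b)
mean_abs_diff = np.mean(np.abs(rater_a - rater_b))
print(f"Pearson r = {r:.2f} (p = {p:.3f}); mean |difference| = {mean_abs_diff:.2f}")

# Images where the raters differ by a full point or more are the ones worth
# discussing together before settling on a final score.
flagged = np.where(np.abs(rater_a - rater_b) >= 1.0)[0]
print("Images to discuss:", flagged.tolist())
```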

@DaraBree: Thanks for chiming in. That’s great to hear. Now I’m even more encouraged to try the MGS. Have you tried the MGS, or only worked with rats?

@jmogil Thanks Jeff. That’d be great! My email is agc14 at duke dot edu. Alternatively, if Susana is willing, I’d be honored to have her share her expertise in this thread on the forum.

Only with rats so far

I’ll let you contact her directly: susana.sotocinal@mcgill.ca.
-Jeff

Thanks @jmogil! I’ll get in touch.

The grimace scale reliably assesses chronic pain in a rodent model of trigeminal neuropathic pain

http://www.sciencedirect.com/science/article/pii/S2452073X17300223

Hi everyone,

I’d potentially like to try the MGS, as I think it would be interesting for a project I have. Does anyone have a training dataset I could use, with your permission? People mention the images included in the article, but I’ve only found the figure explaining the different facial features.

Thank you!
Samuel
samuel.ferland.3@ulaval.ca

Hi @Sferland
You may want to reach out to @jmogil or also Mark Zylka @ UNC. Some people in Ted Price’s lab at UT Dallas also may be able to help. @sshiers @CandlerPaige

Hi @Sferland,
I’ve started using the MGS in almost all of my experiments in addition to von Frey. We don’t have a training dataset, per se, but I found it pretty easy to learn. I just pulled up Figure 1 of the original article as I scored each mouse for the first several experiments I ran, and it ended up working really well. I think the MGS is an awesome addition to pain behavior protocols, so if you have any other specific questions I’d be glad to help! Just let me know.
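If it helps, here is a rough sketch of how the scores can be tabulated once the images are scored, following the original paper’s convention of 0–2 per action unit averaged into a single score. The numbers below are made up for illustration:

```python
# Rough sketch only: roll per-image action-unit scores (0-2 each) up into
# an MGS score per image and per mouse. All numbers are invented.
import numpy as np

ACTION_UNITS = ["orbital", "nose", "cheek", "ears", "whiskers"]

# scores[image_name] = one 0-2 score per action unit, in the order above
scores = {
    "mouse01_img1": [2, 1, 1, 2, 1],
    "mouse01_img2": [1, 1, 0, 1, 1],
    "mouse01_img3": [2, 2, 1, 1, 2],
}

image_scores = {}
for img, au in scores.items():
    assert len(au) == len(ACTION_UNITS)
    image_scores[img] = np.mean(au)
    print(f"{img}: MGS = {image_scores[img]:.2f}")

print(f"Overall MGS for mouse01: {np.mean(list(image_scores.values())):.2f}")

# If the whisker unit is dropped (as discussed earlier in the thread), just
# score the first four units and average those instead.
```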

Candler


@CandlerPaige
Thanks for sharing your experience.

What kind of containers are you all using? And camera?

We’re using Samsung HMX-QF20 HD Cameras in the lab. Here’s a link for specific details:
https://www.aztekcomputers.com/HMXQF20BNXAA-SAMSUNG-3536785.html

They work really well for ICRs, but the C57 faces can be a little difficult to pick up with these specific cameras. We’ve thought about trying out the new GoPro Session5 since they’re cheap, tiny, have great image quality, and can upload directly to our computers - so that might be a thought if anyone’s in the market for a camera for MGS.

As for containers - we just use von Frey racks that we make. They’re clear acrylic and the individual slots are 2.5in wide and 3.5in deep.
