
Dogs' medical records are helping machine learning researchers track ticks

Text classification turns tick talk into data that even a machine can understand

Victoria DeLeo

Plant Biology

Pennsylvania State University

Spring came early this year, bringing warmth, flowers, longer days…and ticks. Ticks emerge from winter dormancy earlier in warmer years. Reported cases of tick-borne Lyme disease have increased over the past 25 years and expanded into new geographic regions as the planet warms. And humans aren’t the only ones who can catch Lyme. Our pets, especially dogs, are also at risk.

Getting hard numbers on ticks and where they are spreading isn’t straightforward. About 30,000 cases of Lyme in humans are reported to the Centers for Disease Control and Prevention (CDC) each year, which is only one tenth of what experts estimate the true number of diagnosed cases to be. To build models from incomplete data, public health officials pull from diverse sources: diagnostic laboratories, insurance claims, weather reports. Even infections in dogs can tell us something about infection rates in humans. These aren’t perfect methods, though, and they rely on people actively providing samples or undergoing somewhat invasive tests.


Your dog's medical records could help science

Photo by Ryan Walton on Unsplash

Luckily, there might be another option for detecting ticks. A cross-disciplinary team led by James O'Neill at the University of Liverpool recently presented a method to use machine learning to predict ticks’ presence from a pet's health records.

In order to build the method, O’Neill and his coauthors took advantage of the Small Animal Veterinary Surveillance Network, an initiative that has accumulated over 3 million records from veterinary visits in the UK. These records include “clinical narratives” — written descriptions of veterinarians' observations. These notes are rich with information about pet health. But it isn’t simple for a computer to make sense of them. Human mistakes like misspellings and transcription errors can make normal words incomprehensible to a machine. Medical jargon is also often missing from the dictionaries that language processing tools rely on.

To get around these problems, the authors built up levels of meaning for sub-words, words, and combinations of words within the text. They had to create a sort of dictionary for the algorithm to refer back to when looking for evidence of ticks. Assigning classifications at different levels allowed the machine learning algorithm to recognize patterns even when the words or phrases didn’t exactly match the examples in the dictionary. For example, "outdoors" and "outside" might be synonyms to a reader but not to a computer. A computer can be trained to recognize the sub-word "out." If being outside increases a dog's chance of getting a tick, the computer can assign a higher tick probability to "out" when it is in the same sentence as, say, "walk," and a lower tick probability when it shows up as part of a common and unrelated word like "about."
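To make this concrete, sub-words can be as simple as overlapping character chunks. The sketch below is purely illustrative (the `subwords` function and its boundary markers are my own invention, not the paper's actual tokenizer), but it shows how two words that never match exactly can still share sub-word units:

```python
def subwords(word, n=3):
    """Return the character n-grams of a word, with < and > as boundary markers."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# "outdoors" and "outside" don't match as whole words,
# but they share the sub-word units "<ou" and "out".
print(subwords("outside"))
print(set(subwords("outdoors")) & set(subwords("outside")))
```

Because the shared fragment "out" survives misspellings of either word's ending, a model built on sub-words is more forgiving of the typos common in clinical notes.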

Cn y rd ths? Most likely you can. There's a lot of information in text that isn't strictly necessary for our minds to understand it. We subconsciously recognize the important patterns and ignore the rest. In this machine learning algorithm, the computer did something like the reverse of what your mind just did. It took the vet records as input, simplified the text by identifying the important features (possible key words like "outside," for example), did this multiple times creating different layers, and then spat out a probability that a tick was present on an animal.

The particular method used here is called a "convolutional neural network," a type of machine learning that got its name because it handles data similarly to how neurons in the eye pick out patterns in visual information. The Liverpool group tested variations on the method, allowing the program to more easily forget the features it identified, and to more easily remember information from layers farther back in the process. Their best version was 84% accurate overall, but only 72% accurate when specifically predicting an absence of ticks.
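The core idea of a convolutional text classifier can be sketched in a few lines. The architecture below is an assumed, stripped-down illustration (tiny made-up vocabulary, random weights, one filter), not the Liverpool group's actual model: it slides a filter over pairs of adjacent word vectors, keeps the strongest response, and squashes it into a probability.

```python
import numpy as np

# Minimal sketch of a 1-D convolutional text classifier
# (illustrative only; not the published model).
rng = np.random.default_rng(0)

vocab = {"dog": 0, "walk": 1, "out": 2, "grass": 3, "tick": 4}
embed = rng.normal(size=(len(vocab), 4))   # one 4-dim vector per token
conv_filter = rng.normal(size=(2, 4))      # filter spanning 2 adjacent tokens

def tick_probability(tokens):
    x = embed[[vocab[t] for t in tokens]]            # (seq_len, 4) matrix
    # convolution: dot the filter with every 2-token window
    windows = [float(np.sum(conv_filter * x[i:i + 2]))
               for i in range(len(x) - 1)]
    pooled = max(windows)                            # max-over-time pooling
    return 1.0 / (1.0 + np.exp(-pooled))             # sigmoid -> probability

print(round(tick_probability(["dog", "walk", "out", "grass"]), 3))
```

In a real system the embeddings and filters are learned from thousands of labeled records, and many filters of different widths are stacked into the layers described above.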

If 72% accuracy seems low, that's because machine learning isn’t yet ready to diagnose your pet. Artificial intelligence is just starting to be used to diagnose medical images, aid drug discovery, and prevent surgical complications. The method of tick detection described here was only presented at a conference, so the findings should be taken with a grain of salt for now. There’s still a high rate of false negatives with the method the researchers described. Partly this is because there are far more vet records without ticks than with, making comparisons unbalanced. Another shortcoming is that the method only reports tick presence, whereas a blood test can provide information about tick-borne disease directly. Used carefully, however, text mining of dogs’ medical records can be a complementary tool for understanding tick ecology.
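The imbalance problem is easy to see with toy numbers (these are invented for illustration, not taken from the study): when "no tick" records dominate, a classifier can score high overall accuracy while still missing most of the actual tick cases.

```python
# Toy numbers (not from the study) showing how class imbalance lets
# high overall accuracy coexist with many false negatives.
y_true = [0] * 90 + [1] * 10   # 90 "no tick" records, 10 "tick" records
y_pred = [0] * 98 + [1] * 2    # a classifier that rarely predicts "tick"

overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
tick_recall = sum(p == 1 for t, p in zip(y_true, y_pred) if t == 1) / 10

print(overall)      # 0.92 -- looks impressive
print(tick_recall)  # 0.2 -- but 8 of 10 real tick cases were missed
```

This is why per-class numbers, like the 72% figure above, matter more than a single overall accuracy score when one class is rare.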


An Asian longhorned tick, a species that was first reported in the US in 2017.

Ticks are changing their behavior and their range in response to changing climates. We can forecast where ticks and tick-borne diseases will be based on where they are now and the environments they prefer, but our predictions are only as good as the data they’re based on. Veterinary clinical narratives offer a complementary source of information to add to our current data. In fact, positive blood tests in vet records between 2001 and 2007 clued scientists in to the possible existence of a not-yet-discovered pathogen in Wisconsin and Minnesota. Scientists knew that a bacterium called Ehrlichia chaffeensis could infect dogs, but that species is rare in that region of the US. Yet dogs tested positive at a surprisingly high rate for antibodies associated with the disease, which could happen if the dogs had been exposed to a similar species. Sure enough, human cases of ehrlichiosis were discovered in Wisconsin and Minnesota in 2009, and a new species, Ehrlichia muris eauclairensis, was formally described in 2011.

Peer Commentary

We ask other scientists from our Consortium to respond to articles with commentary from their expert perspective.

This is a really nice breakdown of how we can use machine learning to help us transform otherwise unwieldy data sources into something usable. In particular, there is such valuable information in long form clinical records, but variation in communication styles, abbreviations, etc. are rampant. This is also a great example of the One Health framework: understanding the health of other animals can help us better understand our own.

Ankita Arora

Biochemistry

University of Colorado

Who doesn’t like a science article that combines science, artificial intelligence, and pets? This is a great attempt to tie together the advantages of machine learning in improving healthcare. In the future, improving machine-learning algorithms by adding in variables such as environmental changes and geographic location might increase prediction accuracy even further.

I would also like to point out that pet health charts and notes taken during a visit to the veterinary clinic follow fewer guidelines, as there is no systematic infrastructure for maintaining electronic records like there is for humans. So machine learning could also help streamline this process and eventually aid early diagnosis in pets. A study from Stanford supports this. This might even help to build an online database of pet records and monitor their health in real time.
