Neural networks, a type of machine learning algorithm that works loosely the way our brains do, are bad at dealing with inputs that differ from what they've seen before. Facial recognition algorithms, for example, often fail to recognize the faces of Black people, typically because the algorithms were trained mostly on white faces.
The same can happen with tumor detection. When training a neural network to detect tumors in MRI scans, differences between the scanner setups at each hospital mean that a network trained on data from one hospital performs worse on scans made somewhere else.
One approach to making a more generally useful network is to penalize it for learning information that you're not interested in. In a study published in the journal NeuroImage, researchers applied this idea by penalizing a tumor-detecting network whenever it learned to tell which hospital a scan came from. This arrangement pushed the network to learn ways to identify tumors without relying on scanner signatures specific to any one hospital. When they compared their network to a similar one without this technique, their network did much better on scans from new locations than the old network did.
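The penalty idea can be sketched in a few lines of code. The following is a hypothetical toy illustration, not the study's actual method: two linear "heads" share the same features, one predicting tumors and one predicting the scanner, and the scanner head's success is subtracted from the total loss so the shared features are discouraged from encoding hospital-specific signatures. All names, data, and the weighting factor `lam` are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "features" extracted from 8 scans, 4 from each of two hospitals.
features = rng.normal(size=(8, 3))
tumor_labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])    # tumor present?
scanner_labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # which hospital

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(probs, labels):
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

# Two linear heads on the shared features (weights are random placeholders).
w_tumor = rng.normal(size=3)
w_scanner = rng.normal(size=3)

tumor_probs = sigmoid(features @ w_tumor)
scanner_probs = sigmoid(features @ w_scanner)

task_loss = cross_entropy(tumor_probs, tumor_labels)
domain_loss = cross_entropy(scanner_probs, scanner_labels)

# The key trick: subtracting the scanner loss means that during training,
# features that reveal the hospital make the total loss WORSE, pushing the
# network toward hospital-agnostic representations.
lam = 0.5  # illustrative weighting between the two objectives
total_loss = task_loss - lam * domain_loss
print(total_loss)
```

In a real training loop this combined loss would be minimized with gradient descent; the subtraction is what makes identifying the scanner counterproductive for the network.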
This technique can be used to make neural networks that work better on new data that differs from what they saw during training, which could improve their usefulness as aids to real-life medical diagnosis.