This lab-grown meat grows on spinach skeletons

For lab-grown meat to replace a fresh steak, it needs to look like one

In the last decade, lab-grown meat has emerged as a sustainable alternative to traditional livestock farming. Livestock strain Earth's land resources and account for about 14.5 percent of global greenhouse gas emissions. But while scientists can grow thin sheets of cow meat and scrape them together to form a patty, people eat with their eyes as much as their mouths. For lab-grown meat to replace a fresh steak, it needs to look like a steak.

Growing lab-based meat into 3D structures is difficult because it needs constant delivery of oxygen and nutrients. In living organisms, vascular systems fill that need. Researchers at Boston College previously showed that skeletonized spinach leaves, stripped of everything but their veiny, oxygen-dispersing, vascular system, can support patches of heart muscle cells. Now, they show that lab-grown meat can grow on skeletonized spinach, an essential step to growing steak-shaped meat in the lab.

To skeletonize the spinach leaves, the scientists “decellularized” them, stripping away the greenery and leaving behind a translucent ghost of a leaf. Then, the scientists spread cow muscle cells on the ghostly leaves, like butter on fresh bread. After two weeks, the cells not only survived and multiplied, but also organized into long strands of muscle fiber. These long strands are the building blocks of steak — whether from a cow or from a spinach leaf.  

Lab-grown meat is a technological solution to the environmental crisis. And while we need new and better technology (think, solar panels and battery storage) to change the course, the technology also needs to maximize environmental sustainability. Using spinach, which is in itself environmentally sustainable, doubles down on the sustainability of lab-grown meat.

Deep sub-surface "microbial dark matter" hasn't evolved since Pangea

The ancient microbes have survived brutal conditions for millions of years and hit pause on evolution

At two miles below ground, the sun last touched the buried rock when carbon dioxide filled the sky, before the days of Earth's oxygen. Drops of water formed time capsules for early microbial life to survive the deep sub-surface, their methods and madness hidden from Earth's surface for millions of years. Despite accounting for about 10 percent of the planet’s total biomass, we know little of these organisms, which scientists have called “microbial dark matter."

Until recently, our understanding of microbes was limited to those that could grow in a lab. Advancements in genome sequencing and culture techniques have now brought light to the darkness, and from the shadows, microbial secrets emerged. Some survived on the buried remnants of photosynthesis, while others house tools unlike any life on the surface.

One such species is Candidatus Desulforudis audaxviator, or CDA, a sulfur-breathing microbe that has spent the last several hundred million years in total isolation, its only companion the radioactivity spilling from its rocky confines. Now, researchers from the Bigelow Laboratory for Ocean Sciences have found that 165 million years ago, CDA abandoned the very engine of life on Earth: evolution.

Scientists originally discovered CDA in a South African gold mine, and later in both North America and Eurasia. This geographic separation let researchers study how CDA evolved after millions of years. The team used DNA sequencing tools to read the genomes from individual cells. Strikingly, the CDA genomes from all three continents were nearly identical.

While cross-contamination was the obvious initial explanation, the team found no evidence of CDA spreading by air, land, or sea. Nor did the microbes stall as spores: all were actively respiring and replicating. After ruling out these possible explanations, the researchers concluded that as the supercontinent Pangea split, between 165 and 55 million years ago, these microbes hit pause on evolution.

CDA is a living fossil, subverting evolutionary change yet surviving millions of years of changes to our planet, including a mass extinction. How CDA managed an evolutionary standstill — perhaps through a meticulous replication process — may have immense applications in biotechnology. It may also upend our understanding of microbial evolution. What other secrets to survival might microbial dark matter be hiding?

Yuning Wang

Biochemistry and Structural Biology

University of Western Ontario

Bats are mysterious creatures: They're the only flying mammals, and they are much more resistant to the viruses that affect us and other species. Another mystery is their exceptionally long lifespan, which can be more than four times longer than that of similar-sized mammals. What’s more, bats show very few signs of aging, making it difficult to accurately estimate their chronological age.

DNA methylation is a biological process in which genetic material accumulates small molecules that affect how cells express genes. Assessment of these "epigenetic" changes has been used as a tool to predict lifespan in humans, determine ages in animals, and understand how and why we age.

Scientists recently were able to estimate the age of bats by analyzing their DNA methylation levels. DNA extracted from wing tissue provides an accurate indicator of a bat’s age as well as a predictor of its longevity. The study found that the longer maximum lifespan of bats is more closely associated with lower rates of change in DNA methylation with age than with body size.
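Epigenetic clocks of this kind are generally built by regressing known ages against methylation levels at many genomic sites, then inverting the fit to estimate age in new individuals. As a minimal, hedged sketch — synthetic numbers and a single site, not the study's actual multi-site model — the idea looks like this:

```python
import numpy as np

# Synthetic example: methylation fraction at one site vs. known bat ages.
# Real epigenetic clocks use penalized regression over thousands of sites.
ages = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])              # years
methylation = np.array([0.20, 0.24, 0.33, 0.41, 0.50, 0.58])  # fraction methylated

# Fit a straight line: methylation ≈ slope * age + intercept
slope, intercept = np.polyfit(ages, methylation, 1)

def predict_age(m):
    """Invert the fit to estimate age from a new methylation measurement."""
    return (m - intercept) / slope

print(round(predict_age(0.45), 1))  # an age estimate of roughly 7 years
```

The key point the study exploits is the slope: species whose methylation changes more slowly with age (a shallower slope) tend to have longer maximum lifespans.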

Subsequent analysis of bat genomes helped researchers identify gene regions that may be linked to bats' age and longevity. The most highly methylated genes were mostly involved in innate immunity and cancer suppression, including several that encode proteins regulating the immune response in bats. The results suggest an explanation for why bats tolerate so many deadly viruses, such as Ebola, rabies, and the SARS-CoV-2 coronavirus, and they could someday be put to human benefit.

Google has shown that they don't actually care about ethical AI

AI algorithms are racist and sexist because these biases are baked into our society. We need to fix this

Simon Spichak

Neuroscience

It is a well-known fact that artificial intelligence (AI) has racism and sexism baked in: after all, a model is only as good as the data we train it with, and our society is systemically biased against people who are not white and/or male.

In 2017, Google formed an Ethical AI team, co-led by Timnit Gebru, to better understand and mitigate these problems. In 2018, Gebru co-authored a foundational study about racism in AI demonstrating that facial recognition algorithms make more errors on dark-skinned faces than on light-skinned ones. In late 2020, she authored a paper critical of large language models: without proper oversight, our own biases are integrated into these models. Google uses many of these algorithms across its products.

Gebru's work went through special sensitive topic review at Google, which required the paper be reviewed by their legal, policy, and PR teams. In at least three cases, they requested a positive spin on research that identified racist or sexist aspects of their algorithms.

Gebru was asked to retract her paper by Google, and faced retaliation when she refused. She was ultimately fired while on vacation in late 2020 by the head of Google's AI division, Jeffrey Dean. 

Over 2,600 Google employees signed an open letter in support of Gebru. Meanwhile, she faced an onslaught of harassment and racism from online accounts. Detractors threw racial slurs and insults, claiming her research was "advocacy disguised as science."

Her manager, Margaret Mitchell, remained critical of Google's firing of Gebru and of the company's attempts at promoting diversity, pointing out that Google is more than happy to meet with historically Black colleges and universities (and to promote these meetings), but does not support its Black staff identifying potential issues in its AI algorithms. Mitchell was fired in February 2021.

These shameful actions by Google show why large technology companies should not operate without oversight. When racist and sexist algorithms permeate every aspect of our lives, their subtle biases turn into tangible health, economic, and societal harms.

2021's wildfire season could be even more devastating to the western US than last year's record-breaker

Fuel moisture data collected by researchers at San Jose State University indicate this year is already very dry

Kristen Vogt Veggeberg

Science Education

University of Illinois at Chicago

The West Coast of the United States, ranging from northern Washington to the Mexican border, faced one of the most devastating fire seasons ever in 2020. Fire season occurs during the hottest and driest months of the year — between July and November — in California and the Pacific Northwest. In California alone last year, 9,917 fires burned. 

The horrifying devastation to the forests of the West Coast was just one of the many tragedies 2020 brought the world. Even temperate rainforests, such as the Willamette National Forest in Oregon and Big Basin Redwoods State Park in northern California, were subjected to horrific infernos, which burned millions of acres and caused billions of dollars’ worth of damage. Now, scientists at San Jose State University are warning of an even worse fire season in store for 2021.

Rainfall and other forms of precipitation in January and February often indicate how bad a fire season will be, as higher amounts of rainfall prevent larger fires from occurring. However, researchers at the SJSU FireWeatherLab, which monitors weather in fire-sensitive areas of California, have measured some of the lowest fuel moisture levels on record during the late winter of 2021. This means that less precipitation and dew are available to keep the mountains and surrounding forests at lower risk of wildfire. The data collected in the Santa Cruz Mountains by SJSU researchers can serve as a proxy for other mountain environments on the West Coast. It signals an upcoming fire season that may, without precautions, become even more deadly than the record-breaking one in 2020.

T. rex walked as slowly as a human, and may have used its tail as a suspension system

In prehistoric times, you could have strolled down the street and chatted with your friendly neighborhood T. rex without even breaking a sweat

Margaux Lopez

Astronomy

Vera C. Rubin Observatory

The mighty Tyrannosaurus rex has just been taken down yet another peg by some meddling paleontologists. While we already knew that a running T. rex would not have been fast enough to catch the fleeing jeep in Jurassic Park, its preferred walking speed has also now been downgraded from a brisk 6.7 miles per hour to a sluggish 3 mph, easily achievable for an average human. In prehistoric times, you could have strolled down the street and chatted with your friendly neighborhood T. rex without even breaking a sweat.

This new estimate is based on the motion of the T. rex’s flexible tail while walking. The tail acts as a counterbalance for the T. rex’s giant body but also as a suspension system, storing and releasing energy as it sways up and down with each step. Researchers used a model of the tail’s motion to figure out how fast it would have moved up and down in its natural lowest-energy state, which corresponds to the preferred walking speed. Once you know how often a T. rex takes a step and estimate the length of each step from the animal’s overall size, you can calculate how fast it is moving. And the answer is: not very fast.
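The arithmetic behind that last step is simply step frequency times step length. A hedged back-of-the-envelope sketch — the specific numbers below are illustrative assumptions chosen to land near the article's ~3 mph figure, not the study's fitted values — looks like this:

```python
# Walking speed = how often a step is taken * how far each step carries you.
# Both inputs here are assumed, illustrative values.
step_frequency_hz = 0.66   # steps per second, set by the tail's natural sway rate
step_length_m = 1.94       # meters per step, estimated from the animal's size

speed_m_per_s = step_frequency_hz * step_length_m   # ≈ 1.28 m/s
speed_mph = speed_m_per_s * 2.23694                 # convert m/s to miles per hour

print(round(speed_mph, 1))  # ≈ 2.9 mph, in line with the ~3 mph estimate
```

The same relationship explains why the estimate dropped: slowing the assumed step frequency to match the tail's resonant sway, while keeping step length fixed, directly lowers the computed walking speed.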

To its credit, this new flexible tail model could actually have the opposite impact on estimates of the dinosaur’s maximum running speed. That top speed is currently thought to be limited by bone strength, since supporting 6 tons of body weight creates a lot of stress. Using the tail as a shock damper would allow the T. rex to run a fair bit faster than previously estimated without shattering its bones under its own weight. More research is needed to figure out exactly how fast that is, but you’re probably still pretty safe in a jeep. 

Scientists harness a rare true blue color from nature — in red cabbage

Chemists gave this natural pigment an extra blue boost with aluminum ions

Josseline Ramos-Figueroa

Chemistry

University of Saskatchewan

Think about last time you were at the grocery store looking around for fresh vegetables. All the items you probably saw were green, red, yellow, or orange. But what about blue?

Blue is quite uncommon in nature, and this fact has baffled scientists for years. Most colors we see in everyday life come from small molecules called pigments. Pigments absorb a set of wavelengths of light and reflect the remainder, thus producing color. While uncommon, some birds and butterflies, a rare berry called Pollia condensata, and some eyes can be blue. But they are all blue for a different reason: their color comes from nanosized structures that scatter or reflect the light wavelengths we see.

Red cabbage leaves are colored dark red or purple because of pigments known as anthocyanins. In 2016, researchers at the Ohio State University analyzed an anthocyanin extract from red cabbage and found around eight different types. Remarkably, among these was a tiny amount of a blueish anthocyanin pigment. In April, in the journal Science Advances, scientists reported turning this pigment into a true blue with a neat chemistry trick.

The international team of researchers described the full molecular structure of this interesting blue pigment, named P2. The pigment molecule is enormous and has several sugar molecules attached to an anthocyanin core. On its own, it is not strongly blue, but P2 can be turned into a “true” blue pigment by adding aluminum ions to it. 

The charged aluminum ions form a stable molecular complex, with three big P2s clamped to a central aluminum ion in a “Y” shape. And having revealed the intricate structure of P2, the researchers could also produce more of it than red cabbage normally does: they used enzymes to convert similar, but more abundant, pigments into P2. With these tricks, the new blue pigment complex can now be produced in larger amounts and investigated for future use in the food industry.

Rare orchid lures in beetle pollinators using deceitful sexual bait

Longhorn beetles deposit sperm in orchids they pollinate, but why?

Sahana Sitaraman

Neuroscience and Behavior

National Centre for Biological Sciences, India

Rooted at one spot, unable to attract a mate or move toward one, plants are at the mercy of animals and the environment to aid pollination and subsequent reproduction. Insects and other animals are responsible for the pollination of 60 to 90 percent of plant species. This is made possible largely by the food rewards that pollinators get from plants, such as nectar and pollen. But some species use unique methods to attract their pollen shippers, often luring them with deceitful sexual bait.

An exceedingly rare species of orchid, Disa forficaria, found in southern Africa, does not produce nectar. Yet, it is reliably visited by males of the longhorn beetle, Chorothyse hessei, which carry away its pollen packets attached to their underside. Before they depart, though, the beetles exhibit vigorous copulatory behavior perched on top of the orchid flower. They bite the furry antennae-like petals and extend their aedeagus — an arthropod equivalent of a penis — to fit into a floral notch. These visits often end in ejaculation, with sperm being deposited at the floral tip. The beetles are under the false impression that the orchid flowers are females of their species. But why?

Callan Cohen, from the University of Cape Town, who first observed this behavior in the longhorn beetles, investigated the reason behind this unusual sexual encounter. He separated the chemicals released by the orchid flower and tested the response of the beetles' antennae, the organs they use to detect odors, to each of them. One of these compounds, named "disalactone", produced electrical responses in the antennae of all tested individuals. The odor molecule was exceedingly attractive to male beetles, so much so that beads treated with it not only drew in the beetles, but also enticed them to try to mate with them. While it is presumed that the scent mimics that of female longhorns, until scientists catch and examine one, we won’t know for sure.

D. forficaria plants are so scarce that only 11 sightings have been recorded in over 200 years. Last observed in 2019 in the study region, it was thought to be extinct. But this new research shows it is still out there, budding and thriving.

Tear gas used in 2020 protests wreaked havoc on people's health

The chemical control isn't just a short-term irritant. For many, it caused lasting physical and psychological effects, including menstrual issues

Max G. Levy

Science and Health Journalism and Chemical Engineering

Tear gas is no joke. Despite being banned from wartime use by the Geneva Convention, police in the US use the stuff to strong-arm crowds and temper (what they allege to be) riots. But the practice has attracted renewed scrutiny in the past year — nationwide protests against police brutality and racism begat more brutality, which hospitalized civilians with injuries from "less lethal" weapons, including tear gas.

In a new study, researchers from the Kaiser Permanente Northwest Center for Health Research report results from a survey of 2,257 adults who attended protests in Portland, Oregon. "Almost all respondents," they wrote, had physical or psychological injuries after tear gas exposure. Nearly 94 percent felt their injuries immediately, and 86 percent had them for days. The study lends scientific support to growing pleas to ban tear gas as a method of crowd control.

Tear gas is an irritant made of tiny solid chemical aerosols. One common ingredient, chloroacetophenone, is the eye-mouth-throat-lung-skin-burning stuff in Mace. Officials have historically greenlit tear gas as a short-term deterrent — the chemical equivalent of batting someone's hand away when they're annoying you. But the underlying safety data is decades old and based on exposure in young, healthy men.

Doctors have known for years that chemical irritants cause serious injuries and even death. This new study presents another compelling case. The survey was anonymous and optional; it took place between July 30 and August 20 and was distributed via Facebook, Twitter, Reddit, and Instagram. Respondents reported immediate symptoms like blurred vision and tearing, burning noses, chest tightness, and coughing. Hundreds (about 20 percent) even reported rashes and burns.

In contrast with the myth that tear gas is just a short-term irritant, over 80 percent cited delayed health issues. Over half of people who menstruate reported menstrual changes and breast tenderness. The researchers argue that tear gas caused this effect: more days of tear gas exposure led to increased chances of symptoms. The researchers compared survey responses among people who experienced one day of tear gas versus five or more days. Mentions of menstrual cramping doubled, from 24.3 to 50.9 percent; of increased bleeding nearly tripled, from 14 to 38.3 percent; and of clotting jumped from 3.1 to 20.2 percent.

These new results confirm what many scientists and advocacy organizations have long been saying: tear gas is brutal on the body. It's banned in warfare, but still around in policing. Whether the nation's patchwork of powerful municipal police systems will reckon with this fact, however, is another story.

Feral cats' hunting abilities make them particularly effective predators, even when prey see them

Prey can spot cats more often than other similar predators, but that doesn't save them

Enzo M. R. Reyes

Conservation Biology

Massey University

Cats have traveled around the world following human colonization, together with dogs and other domesticated animals. Unlike dogs, cats have never been completely domesticated: they keep their predatory instincts and easily become feral in new environments to which they are introduced, becoming a serious threat to native species.

Cats are recognized as one of the main causes of extinction of native species on several islands, including Australia. Research conducted in Australia and published in Proceedings of the Royal Society B shows that despite some animals' ability to recognize cats as predators, their anti-predator response isn't strong enough against the hunting versatility of cats. The researchers attached GPS trackers to 25 cats and ten quolls (a native Australian predator of similar size and feeding habits to cats) to track their movements every 5 to 15 minutes in the middle of Tasmania.

[Image: An orange cat on a patio. Credit: Enzo M. R. Reyes]

They found that native Australian prey animals encounter cats 20 to 200 times more often than quolls. This is due to a combination of three effects: higher population densities, greater home-range use, and broader habitat preferences. The average cat density found in Tasmania was nine cats per square kilometer, compared to 0.4 quolls. Furthermore, cats were found in all the habitats monitored and used their home ranges more intensively, revisiting them more than twice as often as quolls. This increases the probability of prey mortality, given prey's ineffective responses to cats — possibly associated with Toxoplasma gondii, a parasite that increases risk-taking behavior and attraction to cat odor.

Despite the pessimistic outlook for native Australian prey species, the study highlights a possibility other than broad-scale cat control. Although cats are present in all environments, there is evidence that their hunting success decreases in habitats with complex understory structure. So, restoring understory complexity might provide fine-scale predation refuges for native species.

A "strange, giant" galaxy is breaking all the rules

Without a supermassive black hole to stop them from forming, why aren’t there any young stars?

Margaux Lopez

Astronomy

Vera C. Rubin Observatory

A century ago, astronomer Edwin Hubble defined two main types of galaxies: spirals and ellipticals. 

Spiral galaxies are formed in regions of space with a high density of dark matter, known as dark matter halos. As cold gas and dust are pulled toward the center of the halo via gravity, the particles collide and heat up, creating the cores of new stars. Spiral galaxies are disk-shaped and have lots of younger stars that give off blueish light. 

Elliptical galaxies, on the other hand, are born when two (or more) spiral galaxies merge, losing their defining structure and morphing into a regular elliptical shape. In contrast with spirals, elliptical galaxies do not have much ongoing star formation and are mostly made up of older, redder stars. At the bulge in the center of these galaxies sits a supermassive black hole that disrupts the star formation process by heating or ejecting the gas that would normally form the core of a new star. Almost all massive galaxies — defined as more than 100 billion times the mass of our sun — are red ellipticals, since they are the result of many previous mergers.

Perplexingly, a new study of the massive galaxy SDSS J022248.95-045914.9 disrupts that paradigm. Astronomers expected this massive red galaxy to be elliptical because star formation has been quenched. However, there is no supermassive black hole bulge in the center of the galaxy, and the brightness profile matches what’s normally seen in a spiral galaxy. Moreover, it’s located in a dark matter halo, a region that usually has lots of young stars. The authors looked for other massive galaxies with these characteristics in an existing database, but were unable to find anything.

So how did this strange, giant galaxy form, and why did it stop making new stars if there’s no supermassive black hole to interfere with the process? Astronomers need more data to confirm the galaxy’s shape and composition and ultimately answer these questions — a perfect task for a revolutionary new survey telescope. But it’s also quite possible that the galaxy just … ran out of gas.

Just 15 percent of psychology studies are strongly rooted in theory

A study of over 2000 papers highlights the need for more theory in psychology research

Eagle Gamma

Journalism

Psychology is the science of the human mind and behavior. It evolved from the other sciences, like biology, physics, and sociology. Along the way, it has developed into one of the more socially relevant sciences, with its findings applying to government, business, relationships, sports, and many other topics. But a key challenge for the field is that the mind is less tangible than organisms, matter, or even people themselves. The field also has a replication crisis on its hands, in addition to the skepticism that many people already hold towards the notion of studying the mind scientifically.

But does psychology even have functional theory — a set of cumulative explanations for why something is the way it is? For all the work in psychology, including not only scientific investigations but also applications to medicine and the law, much psychological research may lack a theoretical foundation. Thinkers going back at least as far as Thomas Kuhn in the last century have argued that psychology may not have a unifying rationale, and the theories psychologists do use seem to be poorly defined.

In a paper published in PLOS ONE, a group of scientists studied over 2000 papers published in the leading journal Psychological Science, measuring how often they referred to theory. They found that many studies actually do not use theory, and those studies that do tend to refer to them only once. Just 15.3 percent of the papers specifically aimed to test predictions from theories. 

This means that even a prominent psychology journal dedicated to theory-based science only rarely tests theory. The finding may explain the lack of reliability in psychology, and signals a potential failing in the fundamental scientific enterprise of psychology. Can psychology develop a sound theoretical framework? Perhaps this new focus on the matter will push the field toward that goal.

Pets harbor different microbes from their wild relatives

Swapping the diets of wild and domesticated animals could only explain some of the microbial differences

Bhargavi Murthy

Neuroscience

Max Planck Institute of Psychiatry

The diversity of the gut microbiome impacts the health and nutrition of humans and (some) animals. While diet exerts the strongest influence on microbiome diversity, ecological factors like industrialization (for humans) and domestication (for animals) play a role, too.

Domestication can cause differences in mammalian gut microbial signatures, and this has been surveyed in a few mammals in independent studies. However, a recent study attempted to examine these differences across nine wild versus domesticated pairs of mammals, using pets, livestock, and laboratory animals. 

The researchers collected fecal samples of wild and domesticated mammals to extract the genetic information about their gut microbial communities. Though they did not see a single hallmark of domestication, they observed a common shift in the microbial composition of all domesticated animals. When they exchanged the diets of wild-caught and lab mice, they found that the dietary swap could only partially erase the microbiome differences between the groups. 

But the researchers were curious to know if the microbiota of lab mice could be artificially "re-wilded." So, they transplanted lab mice with poop from wild mice and also kept them on a wild diet. They saw that just this one fecal transplantation successfully shifted the microbiota of lab mice toward the wild donor profile, even without much help from the wild diet. 

The researchers also tested the diet swap on wolves versus dogs. Interestingly, while wolves gained microbial diversity on dog food, dogs lost diversity when fed carcasses. Finally, the researchers compared wild chimpanzees to humans to draw parallels between domestication and industrialization. The gut profile of non-industrialized humans was more similar to that of chimpanzees than the profile of their industrialized counterparts was — industrialized humans were, in fact, more similar to their pets.

Domestication has, therefore, left significant effects on the microbial gut community across multiple species of mammals. Examining these differences is essential to understanding host-microbiome-environment relationships, especially in the context of zoonotic diseases and antibiotic-resistant bacteria being harbored in domesticated animals.

Mind-controlling your computer just became one step closer

Two people were able to perform tasks like opening Skype with a wireless implant

Simon Spichak

Neuroscience

Imagine if we could control electronics with our minds. Brain-computer interfaces like this could help people with paralysis regain some independence. They could also help people who are conscious but unable to move or communicate vocally. However, it is difficult to build a working wireless interface: too much hardware and too many wires make it easier for something to go wrong, and therefore harder to use at home.

Scientists at Brown University may have cracked the code with a device called BrainGate. Two people with paralysis participated in this pilot study, published last month. Surgeons implanted the participants with a device to measure electric signals in a small population of neurons. The device detected, recorded, and transmitted signals. Researchers tested how well participants could direct a cursor on a tablet computer. 

First, the researchers tried the standard approach, in which the device used wires to transmit information from the brain to the computer. Participants completed a grid task, in which they needed to move a mouse cursor to a specific spot on the screen.

Each participant then attempted the same task without wires. Instead, researchers implemented a low-power radio transmitter. This transmitter sent signals to a receiver connected with the computer. When using the wireless transmitter, the participants' performances on the computer task remained the same. They used the device at home, showing that it works for everyday computer browsing. For example, they could use apps like Skype, thanks to the wireless brain-computer interface. 

This study is promising because it shows that wireless brain-computer interfaces work without forcing regular people to deal with a tangle of wires. It makes using these devices at home, outside of a lab setting, more feasible. Importantly, it helps people regain their independence through computer use and augments communication.

Foam rolling can help athletes hurdle the post-halftime slump

A new study looks at how this common self-massage technique affects athletic performances

Margaux Lopez

Astronomy

Vera C. Rubin Observatory

Foam rolling is a self-massage technique that has become increasingly popular among athletes in the past decade. Proponents claim that it increases performance and flexibility and speeds up muscle recovery, and foam rolling is often used both as a warm-up and as a recovery method. However, thus far there is no real scientific consensus on which kinds of athletes actually benefit from using foam rollers, or when to use them for the best outcomes.

While foam rollers are normally used only before or after an activity, a recent study aimed to determine whether foam rolling during the halftime of a soccer match could mitigate the significant performance drop that tends to appear at the beginning of the second half, often attributed to a decrease in muscle temperature. In the study, participants performed a series of sprints to simulate the first half of a match. During a fifteen-minute "halftime" period, some of the participants used a foam roller to massage their leg muscles, while the others rested passively. Then the participants repeated the same series of sprints, and the researchers compared their performances in this second half to those from the first half. 

As expected, all players performed slightly worse during the second half due to fatigue, but the researchers found that the foam roller group had a smaller decrease in sprint performance than the group that had simply rested. This suggests that foam rolling positively impacts short-term muscle recovery and allows players to return to the field better prepared to sprint, a useful finding for any competitive sports team with multiple time intervals during a game. While active stretching or other warm-up activities may have produced the same result, foam rolling may be better since it requires less energy and ensures that players can listen to their coaches' instructions during halftime.

Why does COVID-19 often cause brain fog?

Low oxygen supplies in the brain make it difficult to think and carry out every day activities

Elizabeth Burnette

Neuroscience

University of California, Los Angeles

As more and more people get vaccinated against COVID-19, we begin to see a light at the end of the tunnel. However, long-term side effects of the disease remain to be reckoned with. “Brain fog”, a set of long-term cognitive and memory impairments that persist even after recovering from the acute effects of COVID-19, is one such side effect reported by numerous people, even after milder cases.

Recent studies have indicated that the SARS-CoV-2 virus can affect the central nervous system, including the brain. One of the ways it is thought to act within the brain is by selectively targeting the mitochondria of neurons, compromising these cells’ metabolic rates by, essentially, “hijacking” the mitochondrial genome.

Impairing mitochondrial metabolism leads to chronic hypoxia, a state of low oxygen supply. Brains are especially vulnerable to hypoxia because they require a large and constant supply of oxygen. In hypoxic conditions, oxygen-hungry neurons — such as those involved in important cognitive functions — become dysfunctional. This starts a vicious cycle: low-oxygen environments also promote viral replication and survival. As a person's viral infection worsens, their oxygen levels drop more, causing further cognitive impairment and confusion.

Deeper investigation into the long-term side effects of COVID-19 will be imperative, even after societal life eventually returns to normal. But studies examining the mechanisms behind COVID-associated brain fog provide a path toward treatment, hopefully alleviating symptoms for people experiencing them.

Adam Fortais

Physics

McMaster University

Last month, the European Organization for Nuclear Research (CERN) announced preliminary evidence from the Large Hadron Collider beauty experiment that, if verified, would violate "lepton universality." Lepton universality is a principle that says three charged elementary particles — electrons, muons, and taus — can be treated identically, aside from their different masses. This result contradicts the Standard Model, which is physicists' best description of the universe's fundamental building blocks. 

If this finding is confirmed, it would be evidence that leptons cannot be treated as equal, but it would still fall short of an observation. The difference is statistical: the term "evidence" implies that researchers have identified a result that would arise by chance only about once in 1,000 experiments. In other words, there is roughly a 0.1 percent chance of coming to the wrong conclusion. An "observation" is a higher bar: it implies something closer to one chance result in every three million experiments. 
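These two thresholds correspond to the familiar three-sigma ("evidence") and five-sigma ("observation") significance levels used in particle physics. As a rough sketch, the conversion from sigmas to fluke probability can be done with only the standard library; note that the exact "1 in N" figure depends on whether you count one or both tails of the distribution, and this sketch uses the one-sided convention:

```python
import math

def fluke_probability(sigma: float) -> float:
    """One-sided Gaussian tail probability for a given significance level."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Three sigma ("evidence") works out to roughly 1 in 740;
# five sigma ("observation") to roughly 1 in 3.5 million.
for sigma in (3, 5):
    p = fluke_probability(sigma)
    print(f"{sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
```

The one-sided three-sigma figure is a bit stricter than the "once in 1,000" shorthand usually quoted, which is why the quoted numbers are best read as orders of magnitude.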

This evidence strengthens the case for fundamental particles or interactions missing from the Standard Model, but confirming the result wouldn't kill the Standard Model. On the contrary, when viewed through the lens of Thomas Kuhn's 1962 book, "The Structure of Scientific Revolutions," this result looks a little like finding grey hairs on your pillow. It's either a sign of a mature scientific theory, or of one under some extra stress.  

According to Kuhn, there are four broad stages of research within a field: normal science, extraordinary research, adoption of a new paradigm, and the aftermath of a scientific revolution. In some ways, normal science is akin to stamp collecting — new results are consistent with the framework of a field and serve to fill knowledge gaps. This stage is generally where research is applied to new technologies. But as a field ages, more and more results come out that don't fit into the currently accepted framework. This leads some members of the field to engage in "extraordinary" research.

Extraordinary research ends in one of two ways: either anomalous discoveries are reconciled with the existing framework, or the field is upended. In the case of the results from CERN, it is still possible that, with additional experiments, the lepton violation will be found to be a statistical anomaly. The more exciting endpoint of extraordinary research, though, is what Kuhn termed a paradigm shift, a phrase often used to describe events like the transition to a heliocentric model of the solar system or the advent of quantum mechanics.

How many incompatible results a paradigm can withstand varies from field to field, and the Standard Model is a particularly resilient one. In fact, many critics of the Standard Model cite this as a weakness of the model. For some, there are a few too many moveable pieces and details of the model that can be shifted to account for new results. On the other hand, the Standard Model has been very successful at predicting a range of strange, exotic, or elusive results that ended up being borne out experimentally. So how wrong could it really be?

Of course, this is probably what every generation of scientists believes about the era they're brought up in. And this is what makes the preliminary results announced by CERN so exciting. Maybe with a few more experiments, these results will be found to be nothing more than a fluke. But maybe, just maybe, this is one more step towards a paradigm shift in particle physics. 

Hallucinating mice could help us understand schizophrenia

Neuroscientists made mice hear things that didn't happen for a rare look into what creates hallucinations in our brains

Thiago Arzua

Neuroscience

Medical College of Wisconsin

Sensing things that are not there, whether sounds, smells, or visions, can be a disorienting and scary experience. These hallucinations can occur as part of a complex condition, such as schizophrenia, or as one-time events. But hallucinations are notoriously difficult to study, partly because of how hard it is to recreate them in lab animals. To address that, a group of researchers from Cold Spring Harbor Laboratory and the Washington University School of Medicine developed a way of inducing hallucinations in mice and comparing their observations with human behavior. The researchers published their results earlier this month in the journal Science.

The team first trained mice to poke a button. If the mice heard a certain sound played after pressing the button, they got a reward. The researchers found that about 16 percent of the mice would poke the button and confidently "hear" the sound even when no sound played. If that confidence matched the confidence of trials where a sound really did play, the researchers defined it as a hallucination. Those hallucinations increased with exposure to drugs such as ketamine. In humans, the researchers employed a similar task, asking online participants to press a key if they heard a sound and to report their confidence that they had heard it. 

Based on that, the team investigated how exactly these hallucinations were happening, using a computational model that simulated aspects of the trials. They found that hallucinations happened when expectations outweighed the sensory input. They then confirmed this in mice, showing that both the expectation of a reward and the expectation of hearing something changed levels of dopamine in key areas of the brain.

This new approach to causing hallucinations in mice opens doors to a better understanding of hallucinations in different diseases. It may also inspire new treatments for silencing auditory hallucinations. 

An array of bacteria and fungi help leafcutter ants break down toxic leaves

The ants and the microbes work as a team to digest rainforest vegetation

Sophia Friesen

Developmental Biology

University of California, Berkeley

The diverse vegetation of neotropical rainforests is full of a wide array of toxic and repellant chemicals. In this forest of poisons, one group of herbivores reigns supreme: leafcutter ants, which account for 25 percent of all herbivory in neotropical rainforests, including plants that contain pesticides. They accomplish this remarkable feat by outsourcing plant digestion to "gardens" of subterranean fungus. The fungus grows on the leaves, the ants eat the fungus, and everyone is happy.

But researchers at UW Madison and Universidad de Costa Rica recently found that the compounds in harvested plants are toxic to the fungus, too, which lacks the necessary genes to break down many common plant toxins. How can leafcutter colonies thrive with a constant influx of poisonous leaves?

The answer may lie in a third player in this cooperative system: a diverse bacterial community associated with leafcutter fungus. Scientists had known for years that microbes in fungus gardens benefit the colony by synthesizing otherwise-rare nutrients. But these researchers suspected that they might also help break down plant toxins that neither the ants nor the fungus could handle.

In the genomes of dozens of fungus garden bacterial strains, the researchers found many of the genes required to break down plant toxins, and an analysis of “metagenomes”, containing all bacterial DNA from fungus gardens, uncovered even more degradation pathway genes. This suggests that toxin breakdown may be a task shared between multiple strains of bacteria. 

To directly test garden residents’ abilities to degrade plant toxins, the researchers exposed bacteria, fungus, or intact garden communities to two plant compounds that reduce fungal growth and repel insects. They then measured the levels of each compound over time. Intact mini-gardens could degrade both compounds, and so could some varieties of bacteria, but fungus alone couldn’t.

While many plant compounds remain incompletely explored, this research suggests that the incredibly broad foraging ability of the ants is enabled by cooperation with an equally diverse set of fungi and bacteria. Together, the community can deal with a set of poisons no single organism could handle alone.

Can monitoring indoor carbon dioxide levels predict how COVID-safe a room is?

The gas can be a proxy for stagnant air — but the answer is complicated

Krystal Vasquez

Atmospheric Chemistry

California Institute of Technology

In 1875, scientists proposed that carbon dioxide (CO2) concentrations could help determine how well-ventilated an indoor space is. The more CO2 present, this theory went, the more stagnant the air. Now, nearly a century and a half later, this idea has resurfaced during the COVID-19 pandemic. Mounting evidence suggests that COVID-19 transmission occurs via aerosols that — without proper ventilation — can linger in the air for minutes to hours.

This then raises the question: as we begin to reopen schools and businesses, could we measure CO2 to determine how safe an indoor space is? 

In a paper published in the journal Environmental Science and Technology Letters, scientists from the University of Colorado Boulder, Zhe Peng and Jose Jimenez, studied this question. It turns out that linking infection risk to CO2 levels is more complicated than just ensuring they don’t cross a pre-established threshold. Using no more than a couple of low-cost CO2 monitors and a mathematical model, Peng and Jimenez determined that risk depends on a number of factors, such as the number of occupants in a room, the percentage of occupants wearing masks, and the likelihood they’re participating in activities known to more easily spread the virus, such as talking or singing. So there is no single "safe" CO2 range for COVID-19 transmission.

But even though there are a lot of variables that need to be taken into account, this doesn't mean that measuring CO2 isn’t helpful. Public health officials can easily estimate the likelihood of transmission for a number of shared indoor spaces, like classrooms and grocery stores, using the COVID-19 aerosol transmission calculator, which is based on the mathematical formulas Peng and Jimenez share in their paper. And monitoring CO2 levels can still indicate someone’s relative risk of infection: all else equal, if better ventilation halves CO2 levels, then the transmission risks go down too, they say.
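One way to see why halving CO2 roughly halves relative risk, all else equal, is through the rebreathed-air idea that underlies models like this one: the CO2 above outdoor background tracks the fraction of each breath you inhale that someone else already exhaled. The sketch below illustrates that proxy; the function name and ppm values are illustrative assumptions, not figures from the paper:

```python
def rebreathed_fraction(co2_indoor_ppm: float,
                        co2_outdoor_ppm: float = 420,
                        co2_exhaled_ppm: float = 38000) -> float:
    """Estimate the fraction of inhaled air previously exhaled by others,
    using excess CO2 over outdoor background as the tracer."""
    return (co2_indoor_ppm - co2_outdoor_ppm) / co2_exhaled_ppm

# Halving the excess CO2 halves this proxy for shared-air exposure:
stuffy = rebreathed_fraction(1500)  # poorly ventilated room
better = rebreathed_fraction(960)   # same room with more fresh air
print(f"poor ventilation: {stuffy:.4f}, improved: {better:.4f}")
```

Because the proxy is linear in excess CO2, cutting the excess from 1080 ppm to 540 ppm exactly halves it, which matches the intuition in the paragraph above; the full model then folds in masks, occupancy, and activity.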

Their results support the idea that better ventilation can reduce the amount of virus-laden aerosols in the air. As Jimenez pointed out in a recent press release, "Wherever you are sharing air, the lower the CO2, the lower risk of infection."

NASA's Ingenuity helicopter achieves the first flight on another planet

30 seconds of flight on Mars is a universe-wide first

Briley Lewis

Astronomy and Astrophysics

University of California, Los Angeles

Only about a century after the first powered flight on Earth, we’ve already managed to fly the first powered aircraft on another planet. 

Early this morning (mid-day Martian time), NASA’s Ingenuity helicopter successfully completed its first flight on Mars. The Perseverance rover, which carried the helicopter to its launch site, watched from afar on the Van Zyl Overlook and captured photos of the historic moment. Its first test flight today lasted 30 seconds, with the helicopter hovering about 10 feet off the ground. (For comparison, the Wright brothers’ first flight was only 12 seconds, and they were testing on a much more hospitable planet.)

Although this still might seem like a short flight, it’s truly incredible given how much more difficult it is to fly in Martian air. In order to fly a helicopter or an airplane, we need lift — a force moving the aircraft upwards. The problem with Mars is that with its thinner atmosphere, we need a lot more power to move enough air and generate the necessary lift. For this reason, Ingenuity is very light (less than four pounds), so it needs less force to get it off the ground. Its blades are also capable of spinning at over 2,500 revolutions per minute (a whopping 5 times faster than helicopter blades on Earth have to spin).
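A back-of-the-envelope scaling shows why those fast rotors make sense: lift grows with air density times blade speed squared, while the lift required grows with the craft's weight (mass times gravity). Equating the two for the same craft on both planets gives the required speed-up. The values below are rounded textbook figures, not NASA's engineering numbers:

```python
import math

# Illustrative, rounded values
RHO_EARTH, RHO_MARS = 1.2, 0.017   # surface air density, kg/m^3
G_EARTH, G_MARS = 9.81, 3.71       # surface gravity, m/s^2

# Lift ~ (air density) * (blade speed)^2, required lift ~ mass * gravity.
# Solving for the blade-speed ratio between Mars and Earth:
speed_factor = math.sqrt((RHO_EARTH / RHO_MARS) * (G_MARS / G_EARTH))
print(f"Blades must spin roughly {speed_factor:.1f}x faster on Mars")
```

This crude estimate lands near the factor of five quoted above; Mars's weaker gravity partly offsets its thin air, and Ingenuity's very low mass and long blades supply the remaining margin.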

Ingenuity helicopter on the ground vs. in the air as seen by the Perseverance rover

NASA Perseverance/Ingenuity

Ingenuity is a “technology demonstration” — a project meant to prove we can do something, paving the way for future science missions. It tagged along with the Perseverance rover, which will leave the helicopter behind when the test flights are over. Since Ingenuity wasn’t the main mission, it was allowed to be more risky — no matter what happened, it would teach engineers valuable lessons about what to try next time. Before Perseverance resumes its primary science mission, the NASA team will try more and more ambitious flights, up to 90 seconds and 980 feet. As MiMi Aung, Ingenuity’s Project Manager, excitedly said today, “We really want to push the rotor craft flights to the limit!” 

NASA Ingenuity team celebrating the helicopter's successful flight

NASA Perseverance/Ingenuity

Despite some delays leading up to today’s momentous flight, NASA scientists report the helicopter is healthy, working, and ready for more. There’s definitely a lot to look forward to in the coming weeks as Ingenuity paves the way for a new kind of spaceflight!

Researchers discover a new high-altitude Himalayan lake — and its waters are red

The reddish hue of the lake is likely due to iron-rich minerals in the area

Sree Rama Chaitanya

Molecular Biology

The Centre for DNA Fingerprinting and Diagnostics

Lakes come in all different sizes, shapes, and even colors: blue, pink, green, and brown. 

Now, scientists from the Indian Space Research Organization (ISRO) have reported a reddish-brown colored glacial lake in a remote region in the Northeast Himalayas. Scientists discovered this reddish lake using satellite images taken over the past 20 years. 

This high-altitude lake sits 5,060 meters above sea level near the Zanskar valley, Ladakh, in the Himalayan range. It is 11 kilometers (~7 miles) from the nearest village and covers an area of approximately 0.2 square kilometers — about the size of New York City's Grand Central Station. The lake is fed by a northeasterly glacier, and the researchers suggest that it has either formed or expanded due to glacial melt caused by climate change. 

The team predicts that the reddish hue of the lake is caused by the dissolution or mixing of iron-rich minerals such as hematite and goethite in the area, a reaction likely catalyzed by chlorine levels in the water or by microbial weathering of sub-glacial bedrock. Because of the unique geochemistry in the area, the molecules in the lake reflect light at longer wavelengths, giving the lake its reddish color. 

The researchers hope to conduct a field study of the lake in the coming summer to better understand the ecology and glacial geochemistry. 

New cell biology tool brings protein studies out of the dark

LightR uses blue light to switch proteins on and off, allowing researchers to look at their functions in cells

Stephanie Batalis

Biochemistry

Wake Forest School of Medicine

Cell biologists have the difficult job of singling out the impact of an individual protein within a cell packed with thousands of proteins. One way to distinguish the function of a protein is to turn it on or off and study the effect this has on the cell. For decades, researchers have been assembling a toolbox of techniques to do just that. But so far, these tools have been limited in their ability to reversibly turn proteins on and back off again.

A recent approach published in the journal eLife uses blue light to give researchers greater control over when and where a protein is activated. The new tool, which the researchers have dubbed "LightR", is shaped like a clamshell made of parts of two photoreceptor proteins from the fungus Neurospora crassa, a red bread mold. 

The researchers engineered the LightR clamshell into the active site of Src, the protein they were interested in switching on and off. In the dark, the clamshell is “open” and pulls the Src active site into an inactive state. The introduction of blue light causes the clamshell to snap closed, restoring the original shape of the Src active site and turning on the protein. When they tested this system in cells, they were able to turn on Src activity with blue light and then turn the protein activity back off by putting the cells in the dark.

With careful protein engineering, researchers around the world could add LightR to their toolboxes. The ability to control when a protein is turned on and off brings protein studies out of the dark and into the light.

Scientists find the first known nursery for the Munk's devil ray

By tracking rays across the eastern Pacific, researchers spied the growth and development of this less common ray species

María Rodríguez-Salinas

Marine Biology and Animal Physiology

Munk's devil rays (Mobula munkiana) are endemic to the Eastern Pacific, especially around the Gulf of California in Mexico. Despite common sightings, much of their behavior remains a mystery to scientists. Because the species is in a vulnerable conservation state, finding out more about these rays and their reproductive habits could prove crucial for their protection.

These mobulas are renowned for their social behavior, gathering in large groups normally sorted by size. In recent years, local fishermen and tourism operators have observed consistent aggregations of these rays in Ensenada Grande, a shallow bay in the Espiritu Santo Archipelago in the Gulf of California. A research team focused on this area, aiming to find out whether a particular age group predominated in these gatherings.

The team captured 95 rays in the bay during different periods and measured and classified them by their physical features as neonates, juveniles, or adults. Using conventional tagging and acoustic telemetry, they sampled the rays' locations and followed their movements for 22 months. The captured rays were mainly juveniles and neonates, and their tracking confirmed that the individuals stayed in Ensenada Grande for several months.

To find out if the juveniles and neonates appeared at the same time every year in this location, the scientists examined professional photographs taken in recent years, which confirmed this hypothesis. These data led to the conclusion that young mobulas use this shallow, protected bay as a nursery, where they grow until they become less vulnerable.

Thanks to this study, the first nursery area for Mobula munkiana has been discovered. Because mobulas are under threat from targeted and accidental fishing, this finding could be vital for their conservation. By protecting this area, we could give their rare offspring a better chance of survival.

What is a comet? Your answer may depend on the language you learn in

Some astronomical terms get lost in translation, depending on the language

Ludving Adolfo Cano Fernandez

Physics

Universidad Mayor de San Andrés

Astronomy, one of the oldest sciences, is awe-inspiring. Studying astronomy can be as simple as peering out at the universe through a telescope, watching a meteor shower, or simply looking out your window at the stars at night. But can we really define the concept of the universe? If we ask an adult that question, we'll probably get a mostly correct answer. But when we ask the same question of children, the answer might surprise us, depending on what language they speak.

In a study published in Participatory Educational Research, more than 100 seventh-grade students (11 to 13 years old) were asked about their concepts of the universe, the Sun, comets, and constellations. They were given four options, only one of which was scientifically correct. The other options represented common misconceptions in astronomy, which children are usually exposed to through everyday experiences such as incorrect concept formation in school, unscientific use of language, and even changes in meaning when words are translated between languages. These experiences can misinform students and affect their understanding of astronomy.

For example, when the children were asked to define a comet, these were the options presented:

  1. Comets are fast-moving stars
  2. Comets are non-moving stars
  3. Comets are not stars, they reflect the light of the sun and are made of frozen ice, dust, and gas
  4. A comet is a falling star, as the stars fall down, they appear to have a tail, and people make wishes when they see them

Only the third option was correct. Nonetheless, just 20 percent of all participants could answer and explain their option correctly. The researchers used these findings to discuss how the languages that children speak could influence their understanding of the concept: In Turkish, the term “comet” is translated as “tailed star”. Similarly, in Spanish, the term meteor shower is translated to “rain of stars”, when in reality meteor showers do not involve stars at all. This study suggests that a revision of the concepts given to children is necessary, especially when the language itself can bring misconceptions. 
