This is something I’ve been thinking about doing for a while now. The idea is that I’ll keep an eye out for recent studies or other science news and then post a monthly summary of the most interesting stories. Usually, the goal will be to post these at the beginning of the month because they’ll be about news from the previous month. But it took me a while to get around to doing this first one. I’m sticking to April 2019 news even though we’re already more than halfway through May.

I’ll be honest here; my real reason for starting this new blog post series is to look up random stuff online and call it “research” instead of “wasting time reading random articles”. However, it would also be great if someone else out there stumbles across my blog and learns something that they might not have otherwise seen. I should probably include a bit of a disclaimer acknowledging that I don’t have a background in science beyond a couple of general education classes in undergrad. Also, I’m only including things that I personally find interesting, which means that the content will be skewed towards some topics more than others. For example, there’s going to be a lot more about neuropsychology than about ecology. This particular month, it’s a lot about food and brains. I encourage reader interaction; if you’ve come across some interesting science news that I haven’t included here, please leave a comment!

The biggest science news of April 2019 is a picture of a fuzzy orange circle against a black background. It’s been around for more than a month now, so the hype has faded a little, but you probably were seeing it all over the internet a few weeks ago, usually accompanied by the name and headshot of Katie Bouman, the 29-year-old computer scientist who played a key role in taking this picture. (In fact, the image is the result of many people’s efforts over the course of many years.) But as much hype as that fuzzy orange circle is getting, it’s all well-deserved, because it’s a real photograph of a mysterious and fascinating phenomenon that we know and love from science fiction. We Earth people have now seen a black hole.

A black hole, which can form when a massive star collapses, is a region of space with a gravitational pull so strong that even light cannot escape from it. It’s actually the opposite of a hole; rather than being empty space, it’s an area of extremely condensed mass. The existence of such objects was suggested as early as 1783 by John Michell, a British academic whose writings and positions span an impressive array of disciplines. (His various roles and titles at Cambridge alone touched on arithmetic, theology, geometry, Greek, Hebrew, philosophy, and geology; he was also a rector in the Anglican church and a relatively prolific writer of scientific essays.) The idea of black holes didn’t get much attention until Albert Einstein’s general theory of relativity came along in 1915, describing how matter bends spacetime and implying that such a thing as a black hole could exist. Even then, the term “black hole” didn’t enter common usage until around 1964, and for a while afterwards black holes belonged mostly to the realm of theoretical physics and science fiction.
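If you’re curious how big that “no escape” region actually is, the standard formula is the Schwarzschild radius, r = 2GM/c². Here’s a quick back-of-the-envelope sketch (my own numbers, not from any of the articles I’m summarizing), using the roughly 6.5-billion-solar-mass figure reported for the black hole in the EHT image:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon for a (non-rotating) mass: r = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

# The black hole photographed by the Event Horizon Telescope, in the
# galaxy M87, is estimated at roughly 6.5 billion solar masses.
r = schwarzschild_radius(6.5e9 * M_SUN)
au = r / 1.496e11    # convert meters to astronomical units
```

That works out to something like 1.9 × 10¹³ meters, or on the order of 100+ times the Earth–Sun distance, which is why a “photo” of it was even conceivable.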

If you look up timelines showing the advances in our knowledge of black holes, there are plenty of landmarks over the course of the last four or five decades, and some of these developments have resulted in images that show the effects of a black hole in some way, shape, or form. But the picture released by the Event Horizon Telescope on April 10 of this year is the first actual photograph to directly show a black hole. The Event Horizon Telescope isn’t a single instrument; it’s a network of (currently eight) radio telescopes around the world whose observations are synchronized with atomic clocks and combined using a technique called very-long-baseline interferometry, effectively creating a telescope the size of the Earth.

In other astronomy news, planetary scientists say that we’ll have a once-in-a-millennium close encounter with an asteroid (99942 Apophis) in about ten years, and it’ll happen on Friday the 13th. (April 13, 2029, to be exact.) We’re told there’s no cause for concern; it won’t hit the Earth. This asteroid is just a fun fact, so let’s move on to the extremely important topic of food.

People have been noticing for a while that the “Mediterranean diet” seems to be healthier than the “Western diet”. Although some organizations have put forth very specific definitions of what constitutes the “Mediterranean diet,” the basic gist is that American food includes more animal-based fats, whereas the cuisine in places like Greece and southern Italy has more plant-based fats, especially olive oil. Proponents of the Mediterranean diet often point out the significance of cultural factors beyond nutrition. Our eating habits in America tend to prioritize convenience over socialization, while the idyllic Mediterranean meal is home-cooked, shared with family and friends, eaten at a leisurely pace, and most likely enjoyed with a glass or two of wine. I mention this because it has been suggested that a person’s enjoyment of their food actually affects the way their body processes the nutrients. In other words, food is healthier when you enjoy it.

In this particular study, that factor wasn’t taken into consideration and probably didn’t play a role. (The socialization aspect and the wine consumption weren’t mentioned.) But the researchers did establish that monkeys fed a Mediterranean diet consumed fewer calories and maintained a lower body weight than monkeys fed an American diet, even though both groups were allowed to eat as much as they wanted. The implication is that the Mediterranean diet, and perhaps plant-based fat specifically, helps prevent overeating.

On another nutrition-related topic, it turns out that protein shakes aren’t as great as many people think. While it’s well established that a high-protein, low-carb diet is effective for building muscle mass, there are drawbacks. Common knowledge has long held that a varied and balanced diet is good, and it turns out that relying too heavily on supplements can actually be harmful in the long run. A recent study (in mice, so the usual caveats apply) found that an excess of the amino acids common in protein supplements can negatively impact mood, lead to a tendency to overeat, and eventually cause weight gain and a shortened lifespan. Even if you’re a bodybuilder, you’re better off getting your protein from regular food than from protein drinks and protein bars.

Let’s move on from foods to beverages. Scientists have suggested that taste preferences could be genetic, and some fairly recent studies have backed up that theory. But a new study from Northwestern University, which focused on a few specific beverages classified by taste category, didn’t reveal many genetic correlations. Instead, it appears that drink preferences are based more on psychological factors. In other words, I don’t love coffee because I love the taste of bitterness; I love coffee because I love the caffeine boost. Another study suggests that I don’t even need to taste my coffee to benefit from it. The psychological association between coffee and alertness means that our minds can be “primed” (that is, influenced) by coffee-related cues, like seeing a picture of coffee. In that study, from the University of Toronto, participants’ time perception and clarity of thought were affected just by being reminded of coffee. (You’re welcome for the coffee picture I’ve included here.)

I’ve come across a couple of other interesting brain-related articles. For one thing, there have been recent developments in the understanding of dementia. We’ve known for a while that Alzheimer’s disease is correlated with (and probably caused by) the accumulation of certain proteins in the brain, but researchers have now identified a different type of dementia caused by the buildup of a different protein, TDP-43. In the short term, this isn’t a life-changing scientific development; the newly acknowledged disorder (called LATE, which stands for limbic-predominant age-related TDP-43 encephalopathy) has the same symptoms as Alzheimer’s, and doctors can currently only tell the difference after the patient’s death. But in the long run, this new information may help doctors forestall and/or treat dementia.

Meanwhile, researchers are working on technology to translate brain activity into audible speech. The idea is that this will give non-verbal people a way to speak that is far more natural and fluid than existing assistive devices.

In other neurological news, the long-standing debate about neurogenesis seems to have found its answer. The question is whether our brains continue to make new brain cells throughout our lives. Some neurologists argue that, once we’re adults, we’re stuck with what we’ve got. In the past, researchers have looked at post-mortem brains and seen little or no evidence that any of the brain cells were new. But in this study, the researchers made sure that their brain specimens all came from people who had died only recently. This time, there were lots of brain cells that appeared to be brand new. The brains that lacked these new cells came from people who had had Alzheimer’s; evidently, dementia and neurogenesis don’t go together. (The question is whether dementia stops neurogenesis or whether dementia is caused by a lack of neurogenesis. Or perhaps neither directly causes the other and there are factors yet to be discovered.)

In somewhat less groundbreaking neurology news, one study from the University of Colorado has shown yet again that extra sleep on the weekend doesn’t make up for sleep deprivation during the week. (This article makes it sound like a new discovery, but medical science has been saying this for a while.) 

All of this neuroscience stuff makes me think of a picture (captioned “name one thing in this image”) that was originally posted on Twitter a few weeks ago. Apparently, it attracted a good deal of attention because a lot of people found it creepy. The picture looks like a pile of random stuff, but none of the individual objects are recognizable. Psychologically, that’s just weird. It brings to mind the intriguing psychological phenomenon known as the uncanny valley.

The uncanny valley refers to the creepy feeling that people get from something non-human that seems very humanlike. For example, robots with realistic faces and voices are very unsettling. If you don’t know what I mean, look up Erica from the Intelligent Robotics Laboratory at Osaka University. Or just Google “uncanny valley” and you’ll easily find plenty of examples. Although the concept of the uncanny valley generally refers to humanoid robots, the same effect applies to other near-human things, like realistic dolls or shadows that seem human-shaped. It’s why many people actually find clowns more creepy than funny, and it’s one of several reasons that it’s disturbing to see a dead body. The term “uncanny valley” refers to the shape of a graph that estimates the relationship between something’s human likeness and the degree to which it feels unsettling. Up to a certain point, things like robots or dolls are more appealing the more human-like they are, but then there’s a steep “valley” in the graph where the thing in question is very human-like and very unappealing. The tweeted picture of non-things isn’t quite the same phenomenon because it doesn’t involve human likeness. But there’s still something intrinsically unsettling about an image that looks more realistic at a glance than it does when you look more closely.
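If the “valley” shape is hard to picture, here’s a purely illustrative toy function I made up (not from any actual study): affinity climbs as human likeness increases, then plunges sharply near full likeness before recovering at the very top of the scale.

```python
import math

def affinity(likeness):
    """Toy uncanny-valley curve (illustrative only, not real data).

    likeness: 0.0 (clearly non-human) to 1.0 (indistinguishable from human).
    Returns an affinity score: a rising baseline minus a sharp negative
    dip centered near (but not at) full human likeness.
    """
    dip = 2.0 * math.exp(-((likeness - 0.85) ** 2) / 0.005)
    return likeness - dip

# Sample the curve across the likeness range to see the valley
curve = [round(affinity(x / 10), 2) for x in range(11)]
```

Sampling the curve shows the shape: scores rise steadily, crater around a likeness of 0.85 (the “almost human but not quite” zone), then rebound as likeness approaches 1.0.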

So what’s the deal with this picture? It was probably created by an AI (artificial intelligence) program designed to generate a composite image based on images from all over the internet. Essentially, the program has learned what a “pile of random stuff” should look like, but doesn’t know quite enough to render accurate images of specific items. This is an interesting demonstration of the current capabilities and limitations of AI technology. At their core, AI programs mimic a real brain’s ability to learn. These programs can glean information from outside sources (like Google Images) or from interactions with people, and then use that information to do things like create composite images, run simulations, and mimic human conversation, whether text-based or spoken.

AI programs have only relatively recently come into common usage, but the technology now plays a large role in our everyday lives. There are voice assistants like Siri and Alexa that can hold a passable conversation, and self-driving cars are becoming a reality. Even things as mundane as recommendations on Netflix or Amazon are technically examples of AI, and AI programs are used for simulations and analysis in areas such as marketing, finance, and even health care. Recently, medical researchers have found that AI is creepily accurate at predicting death. And science is continually coming up with ways to advance AI technology. (I admittedly don’t understand this article explaining that magnets can make AI more human-brain-like, but it’s interesting anyway.)

In the interest of time, I’m going to end this blog post here. It’s gotten very long even though I’ve cut out probably about half of the stories I’d intended to include. If all goes as planned, I’ll have another one of these to post in a couple weeks.
