Science News from June 2019


A few weeks ago, I finished my summary of May’s science news by saying that this month’s post would probably include stories about weather forecasts, pentaquarks, and the Mona Lisa. So those are the stories that I’ll use to begin my science news summary for this past month. 

First, here’s the link to an article about the updates in weather forecasting. There’s some question among meteorologists as to whether the updated forecasting system is really ready; it hasn’t yet been as accurate as they’d like. But even if it isn’t quite right just yet, we may be getting close to seeing increased accuracy in weather forecasting.

And here’s the article about the structure of pentaquarks. Admittedly, I don’t understand the significance of pentaquarks. Yes, I know that they’re subatomic particles consisting of five quarks, and this article further explains that pentaquarks come in two pieces: a baryon made of three quarks and a meson consisting of a quark and an antiquark. But what kinds of atoms and molecules have pentaquarks, and what properties distinguish them from atoms and molecules without pentaquarks? I’m not sure, so if you know, please leave a comment below. Here’s another fun physics tidbit: Diamonds may be the key to developing the technology to detect dark matter.

This study about the Mona Lisa wasn’t so much a scientific experiment as an analysis of the facial expression in the painting. As such, it’s nothing really groundbreaking; that painting has been around for over five hundred years. In fact, the researchers’ conclusions sound very familiar to me, and I think that other people have said similar things in the past. But it’s still an interesting read. This article describes how a group of researchers led by a neurologist from the University of Cincinnati took a close look at Mona Lisa’s famous smile and concluded that it is probably a non-genuine smile, as evidenced by its asymmetry. Even if this were a new revelation, it wouldn’t be a big surprise, because it’s normal for people to fake-smile for pictures. But some people are speculating that Da Vinci was intentional in depicting an insincere smile and that there may be “cryptic messages” to glean from it. The article suggests that perhaps this was a self-portrait or maybe it “referred to a man or a dead woman”, although it doesn’t explain why a fake smile would be evidence of those theories.

Speaking of cryptic messages, this seems like a good segue into the fascinating topics of dreaming and sleep. This is a scientific subfield that I find especially interesting, and I felt that way even before coming to the realization that my own sleep is abnormal. (Yes, I’ve been to sleep specialists, and yes, they’ve confirmed that I’m weird, and no, they don’t know why that is or what to do about it.) This study conducted in Finland had somewhat disappointing results in that researchers were not able to detect participants’ dreams by monitoring brainwaves. That certainly seemed like a realistic goal. Sleep and dreaming are neurological processes, and in fact, brainwave patterns are what distinguish the different sleep stages. We’ve known for a long time that most dreaming happens in REM sleep, which is characterized by brain activity similar to wakefulness even though the rest of the body is at its deepest level of sleep. (REM stands for Rapid Eye Movement, which is another thing that differentiates REM sleep from nREM, or non-REM, sleep.) But it’s not a direct correlation. The brain doesn’t necessarily dream in REM sleep, and some dreaming does occur in nREM sleep. There’s still more to learn about why it works that way.

 


Why We Sleep: Unlocking the Power of Sleep and Dreams by Matthew Walker, 2017.

I happen to be in the middle of reading a book about sleep, and as it so happens, Matthew Walker, the author of that book, is one of the researchers who conducted this study on sleep and Alzheimer’s disease. In fact, the chapter that I just read covers the very observations that this study backs up. The basic gist is that changes in sleep patterns appear to be an early indicator of Alzheimer’s disease. The news article is very brief and I can’t access the full text of the journal article itself, but based upon the abstract and the chapter in the book, it would seem that there’s still room for debate about cause and effect. Thanks to relatively recent medical advances, we now know that Alzheimer’s disease is caused by the buildup of certain proteins in the brain, but is that buildup caused by poor sleep quality and insufficient amounts of sleep? Or do those protein buildups impair sleep? According to Walker (at least as of the book’s 2017 publication date), both are probably true. It’s a vicious cycle in which the symptoms of the disease also drive the progression of the disease.

 

In other sleep-related news, another study suggested that leaving the light on while sleeping may increase the risk of obesity in women. (The study evidently did not evaluate whether this applies to men.) The article isn’t very specific about possible explanations for this, except for a line about “hormones and other biological processes”. Since it’s fairly well established now that inadequate sleep is linked to weight gain, I’m speculating that sleep quality is the significant factor here. It makes sense that artificial light could affect the sleeping person’s sleep cycle, perhaps by making it take a little longer to get all the way into deep sleep, or by altering the proportion of REM to nREM sleep.

I’ve come across some other weight-related science stories. While this one doesn’t directly discuss sleep, it continues the theme of brain activity. (If you’re tired of my fascination with the human brain, here’s a fun story about bees’ cognition.) Young children’s brains use a very large proportion of their energy intake. At some points in children’s development, their brains are using more than half of their calories, which is pretty amazing even before you stop to think about the fact that those children’s bodies are also growing and developing quickly. The article goes on to say that this energy expenditure could mean that education at the preschool level can stave off obesity. But I think that another important takeaway here is that it’s important for young children to be well-nourished.

Meanwhile, for adults, coffee could help fight obesity because of its effect on BAT (brown adipose tissue), otherwise known as brown fat. The article goes on to describe how brown fat differs from regular white fat and what role it plays in metabolism. As you might guess, caffeine is the reason that coffee has this effect. Although lots of people have long claimed that black coffee combats weight gain, it appears that this study is groundbreaking in demonstrating how this works. (The part about caffeine speeding up metabolism has been common knowledge for a while, but that’s much more vague than the new information about brown fat’s role in the process.)

Yet again this month, there’s some new information about autism. This study identified a part of the brain in which a lower density of neurons corresponds to certain traits and mannerisms associated with autism, specifically those involving rigid thinking. In any case, this information doesn’t add anything to our limited understanding of what causes autism, but another study indicates that one factor might be propionic acid, a preservative often found in processed foods. This article focuses on the effect of propionic acid on a developing fetus. I can’t tell from the article whether this information is specific to a certain prenatal phase of development or if it also applies to children after birth.

In other health-related news, progress is being made in diagnosing Lyme disease, and we may even be approaching a cure for the common cold. For the particularly health-conscious, here are some other things to take into consideration in your everyday life: certain microbes in the gut may have a positive impact on athletic ability, it’s a good idea for everyone to spend two hours a week out in nature, and antioxidants are actually bad for you. Okay, that’s an unfair oversimplification. But the point is that scientists are coming to the conclusion that antioxidants are not the anti-cancer solution we’ve thought they were; they could actually increase the risk of lung cancer because they don’t just protect healthy cells from those harmful free radicals we hear so much about. They also protect cancer cells. Of course, that doesn’t mean that you should avoid foods high in antioxidants. But it does mean that it’s not a good idea to take dietary supplements that give you several times the needed amount of certain nutrients. It’s just one more reason that it’s healthier to rely on food for your nutrition. Incidentally, one such antioxidant is Vitamin C, which has been praised as one of the few nutrients that isn’t bad for you if you consume too much. I guess that’s not true after all.

Rather than ending on that bleak note, I’ll wrap this up by pointing out an impressive technological feat. Researchers from Carnegie Mellon University have developed a brain-controlled robotic arm that’s headline-worthy as the least-invasive BCI yet. BCI stands for brain-computer interface, and it’s basically what it sounds like: technology that allows computers to respond directly to the human brain. I have no idea how it works. Here’s the article, although you should probably ignore the stock image of a computer cursor. If you want a relevant visual, this brief YouTube video includes a few seconds showing the actual robotic arm. Sure, it’s not exactly cyborg technology as depicted in movies, but it’s still pretty incredible.


Rambling about Millennials, Part One



Pictured: said book

I recently read a book from 2006 that commented that we hadn’t yet coined a term to label the age demographic that comes after “Baby Boomers” and “Generation X.” Although that book wasn’t very outdated otherwise, that one sentence is now inaccurate and actually kind of funny. At some point shortly after that book was published, the media fell in love with the word “millennial,” and for a while now, it’s been consistently used as the name of a certain demographic group. The millennial generation is roughly defined as those who were children at the turn of the millennium, although some have specified that millennials are those born between 1982 and 2004. (That parameter evidently was first laid out by authors Neil Howe and William Strauss, whose theories are more speculative than empirical, but worth googling if you find yourself with a few spare minutes.)

At any rate, since I was born in 1991, I’m definitely well within this range and am indubitably a millennial. As such, I have a lot I’d like to say on various subtopics of millennial-ness, some of it addressing generalizations and some of it describing my own theories that are also more speculative than empirical. In fact, I have too much millennial-themed potential content to stick it all into one blog post, so this is going to be a multi-part series. (At this point, I’m thinking it’ll be four parts.) A logical starting place is the very concept of categorizing people into specific age demographics.

Personally, when I was a child, I was under the impression that humanity essentially fell into three groups: children, teenagers, and adults. Sometimes, it might be convenient to sort adults into the categories of parent-aged adults, grandparent-aged adults, and adults older than my own grandparents, but for the most part, I thought of “growing up” as a sort of finish line. Getting there might be a gradual process, but once you passed the line, you were done, and you were just as grown-up as any other grown-up. Of course, I found out long before turning eighteen that a person’s entire lifespan, and not just childhood, is a series of changes and landmarks. But it still came as a bit of a surprise when, well into my twenties, the society around me still didn’t consider me fully adult. To some extent, I think this is a current trend caused by social and economic factors; the age of financial independence has been pushed far past the age of legal adulthood or physical maturation. That’s something I intend to write much more about later. But this isn’t entirely a modern thing; it’s always been true that there are major distinctions between different age categories even within adulthood.

If we’re talking about biological aging or cognitive changes or the gradual accumulation of knowledge, I would imagine that aging has happened at the same rate for at least many centuries, if not for all of human history. But if we’re talking about intergenerational differences, I think that things have really sped up since the mid- to late- 1800s. For the last 150ish years, technology has developed so rapidly that each generation is growing up in a very different setting than the last one.

Telephone history serves as an obvious example. After Alexander Graham Bell got his telephone patented, it took 46 years before a third of American households had telephones. At the time, that surely seemed like a major cultural shift. Communication was suddenly much faster and easier; the telephone changed the way we stay in touch with family and friends, seek help in emergencies, and interact with coworkers or customers. Yet 46 years seems like an awfully slow transition by today’s standards. Now, over three-quarters of Americans own smartphones, just 23 years after the first one was invented, and it’s been a mere 10 years since iOS and the Android operating system came into being. (The slightly-used iPhone 4 I bought in 2014 is so outdated that I’ve had strangers stop me to ooh and ah over my antique phone. I am not even kidding about that.) Similar statistics apply to various other appliances and devices.

But it’s not just about technology; along with those changes come shifts in every aspect of culture, from fashion and music to the prevalent philosophies and worldviews. The Renaissance lasted for about three or four centuries, and the industrial revolution was several decades long (anywhere from 60ish years to almost 200 years, depending upon what source you consult), but in recent history, we talk about decades rather than eras. I don’t think that’s just a matter of nomenclature; I think that many of us genuinely think of the ‘80s or the ‘90s as bygone eras.

Long before I read the book that I mentioned at the beginning of this blog post, I was formulating an explanation of generational differences (especially in terms of political opinions) that was based on these types of changes. It’s more than just technology and popular culture that changes over time; it’s also the political environment and the economic state of affairs. For example, I was born just as the Soviet Union was breaking up and the Cold War was ending. Although there have obviously been international tensions and conflicts since then (and one can certainly argue that some of them are linked to the events and attitudes of the Cold War era), the fact of the matter is that I grew up in a political environment very different from that of the previous few decades. The post-nuclear landscape was just a sci-fi setting rather than a plausible fear, “terrorism” was a more common and frightening buzzword than “communism,” and we didn’t talk about “mutually assured destruction” because we all knew that the USA was a superpower and that we had less to fear from actual war than from school shootings, suicide bombings, and the like. Even the terrorist attacks of 2001 and the more recent threats from ISIS are recognized as originating from fringe groups, not from entire nations. It’s a commonly accepted fact that people instinctively fear or dislike “the other,” but I’d posit that it’s a much weaker instinct for those of us who grew up in post-Cold-War America. Whether you see that as good or bad, whether you call it “tolerance” or dismiss it as extreme liberalism, I think it explains a good deal about intergenerational differences in political opinion.

My point here is that any explanation of “why millennials are so…” has to take into account the various factors that made the ‘90s and ‘00s different from, say, the ‘70s and ‘80s. I’m not going to pretend to have sufficient expertise in sociology, childhood development, politics, economics, etc., to make a comprehensive list of all such factors, but I can certainly suggest a few that I think are major ones. As I discussed in the paragraph above, the end of the Cold War makes a difference. Perhaps even more significantly, modern technology has greatly increased the speed of communication, and it’s also meaningful that the entertainment industry has made more rapid technological advances than other fields. While commercialization has been an issue for generations, advertising is just getting more insidious and subliminal all the time, subtly altering our collective priorities even as we become less and less trustful of mainstream media and of rich and powerful people. And the emphasis on self-esteem in parenting and education is a big deal too; in fact, it’s the main topic of the book I’ve mentioned a few times now. Sure, that trend originated in writings from around the turn of the century, but it picked up steam slowly, and my generation is probably the first to be indoctrinated into it enough to experience the drawbacks. Much more on that later.

Another biggie is the changing views on education. As higher education has gotten more and more common over time, it’s also become more and more necessary. We’ve reached a point where a college education is not only essential for success in most career paths, it’s also a social expectation for the entire middle class and for those from wealthy families. But higher education has also gotten more expensive over the past few decades, and educational loans have become more common and much larger. For the last decade or two, it’s been considered normal to take out student loans by the thousands and tens of thousands. So that’s another thing that makes the millennial experience different from that of earlier generations: it’s now normal and supposedly inevitable for young people to enter adulthood with astronomical debt. No longer is debt something that happens to you if you hit hard times or make bad life choices; now it’s practically a coming-of-age landmark. And in general, it’s the people who rack up more debt who become recognized as high achievers, and those who make decisions enabling them to avoid debt who are thought of as inferior, or at least less successful. It’s no wonder that young adults are more likely than older adults to believe that the government is responsible for our financial well-being. Socialism sure does sound nice when long-term debt is normal and when the “right” life choices are more expensive than the “wrong” ones.

I’m not saying any of this to speak against or advocate for any particular political/economic stance. (For what it’s worth, I’m actually much more conservative than the average or stereotypical person of my age demographic.) My point here is that “millennial” attitudes make sense in context. If I follow the vague outline I have for this blog-post series, that concept of context is really going to be the central point of the whole thing. When you think about it, the only difference between generations is context. If you could somehow ignore the effects of cultural influences, technology, socio-economic circumstances, political environment, and social expectations, everything that’s left (basic personality traits, appreciation for things like nature or music, capacity for learning, etc.) might vary from person to person, but is pretty much constant from generation to generation.

A Campaign to Save CDs


I’ll be the first to admit that, in terms of technology, I’m a little behind the times. I mean, I didn’t have a cell phone or a Facebook account until 2009, I rarely text, and I’m a little unclear on the distinction between an iPod, an iPad, and an iPhone. I still think it’s pretty awesome that, not only do I have a phone that I can conveniently carry around with me, but that said phone can also be used as a clock, a stopwatch, a camera, and an alarm, and I can even use it as a light source. Never mind that it can’t connect to the internet and doesn’t have the games and apps that everyone else seems to have. I don’t feel that I have much of a use for that kind of technology; the existence of cell phones themselves still seems pretty cool to me. And don’t even get me started on the internet. The internet is like magic. Here I am, sitting alone in my dorm room and typing words that will shortly be visible on computer screens (and other internet-enabled devices) all over the world for anyone to see. If I were to log onto Facebook right now, I would be able to simultaneously look at stuff posted by people who are in the same building as me and people who are in different countries.

So, despite my relatively old-fashioned use of technology (if you can call it old-fashioned to use devices that are just a couple years out of date), I’m really not opposed to technological advances. At least, not in theory. I don’t approve of the fact that my computer has updates to install on a daily basis, and I really don’t like the fact that anything related to computers becomes obsolete almost immediately after it comes into existence. And I really, really am annoyed by the fact that music and video formats are constantly being revolutionized.


I knew that this semi-artsy picture that I took a couple years ago would someday be relevant for something.

When I was little (we’re talking early 1990s here), my family mainly used cassette tapes to listen to music and VHS tapes to watch videos. We also listened to vinyl records sometimes, although the record player lived in the basement, and we had several tape players in various parts of the house. It was Christmas 1999 when we gave in and started using CDs. After that, I think it was another four or five years before we got a DVD player. Now, there still are vinyl records, cassette tapes, and VHS tapes in our house, and the family still owns the technology to play them, but we have become accustomed to primarily using digital technology. Unfortunately, even though it’s only been a few years, that technology is already on the way out. Now, people listen to music on their various iThings, and DVDs are giving way to Blu-ray. (To be honest, I don’t even know exactly what Blu-ray is, even though I’m well aware that it’s actually been around for quite a while.) Otherwise, people just use the internet to listen to music and watch videos.

 


This isn’t necessarily related to this blog post in any way, but whatever.

I don’t necessarily think that’s a terrible thing, but it bothers me to know that my fairly impressive CD collection is already nearly obsolete, and DVDs aren’t much better. Meanwhile, it’s hard to even find a cassette player anymore, so cassette tapes are almost worthless, even though they were still in standard use just a couple of decades ago. In the grand scheme of things, a couple decades is no big deal. I’m not very old; I shouldn’t have had to already watch a couple phases of “new” technology turn into obsolete technology. Just imagine all of the new forms of media storage and media playing that will come about throughout the course of my life! The state of my music collection will never be constant. I’ve already experienced the transition from cassette tape to CD and a partial transition from CD to MP3s. And I’m behind the times; practically everyone else in the world is moving beyond MP3s.

This is my request to the people of the world: please don’t let CDs become obsolete! I love my CDs. I think that MP3 players are a pretty neat idea, too, but I still prefer CDs. Once CDs finish going out of style and iDevices completely take over as the universal norm, something else will come along and iThingies will become obsolete, too. Then everyone will either have to lose significant portions of their music collection or spend lots of money to replace it all. The only way we can end this ongoing pattern of technological replacement is to decide that we like our technology the way it is, and we’re going to keep using it, even if electronics manufacturers tell us that we aren’t supposed to like the old way. (Again, I think it’s ironic that something like CDs can be called “old” when they were cutting-edge just a generation ago.)

Let’s keep technological advances in fields such as medical science, engineering, and physics, where they can do good and useful things that will benefit the human race as a whole. The entertainment industry is already more developed, and less important, than any other field of scientific endeavor. If advances in entertainment technology slow down or even stop, it isn’t going to hurt anyone. I suggest that we all continue to use and enjoy the music-storage and music-playing devices that we have now and do our part to help them stick around for a while longer.

Long live the Compact Disc!