Science News from June 2019


A few weeks ago, I finished my summary of May’s science news by saying that this month’s post would probably include stories about weather forecasts, pentaquarks, and the Mona Lisa. So those are the stories that I’ll use to begin my science news summary for this past month. 

First, here’s the link to an article about the updates in weather forecasting. There’s some question among meteorologists as to whether the upgraded forecast model is really ready; it hasn’t yet been as accurate as they’d like. But even if it isn’t quite right just yet, we may be getting close to more accurate weather forecasts. 

And here’s the article about the structure of pentaquarks. Admittedly, I don’t understand the significance of pentaquarks. Yes, I know that they’re subatomic particles consisting of five quarks, and this article further explains that a pentaquark comes in two pieces: a baryon made of three quarks, and a meson consisting of a quark and an antiquark. But what kinds of atoms and molecules have pentaquarks, and what properties distinguish them from atoms and molecules without pentaquarks? I’m not sure, so if you know, please leave a comment below. Here’s another fun physics tidbit: Diamonds may be the key to developing the technology to detect dark matter. 

This study about the Mona Lisa wasn’t so much a scientific experiment as an analysis of the facial expression in the painting. As such, it’s nothing really groundbreaking; that painting has been around for over five hundred years. In fact, the researchers’ conclusions sound very familiar to me, and I think other people have said similar things in the past. But it’s still an interesting read. This article describes how a group of researchers led by a neurologist from the University of Cincinnati took a close look at Mona Lisa’s famous smile and concluded that it is probably a non-genuine smile, as evidenced by its asymmetry. Even if this were a new revelation, it wouldn’t be a big surprise, because it’s normal for people to fake-smile for pictures. But some people are speculating that Da Vinci deliberately depicted an insincere smile and that there may be “cryptic messages” to glean from it. The article suggests that perhaps this was a self-portrait or maybe it “referred to a man or a dead woman”, although it doesn’t explain why a fake smile would be evidence of those theories. 

Speaking of cryptic messages, this seems like a good segue into the fascinating topics of dreaming and sleep. This is a scientific subfield that I find especially interesting, and I felt that way even before coming to the realization that my own sleep is abnormal. (Yes, I’ve been to sleep specialists, and yes, they’ve confirmed that I’m weird, and no, they don’t know why that is or what to do about it.) This study conducted in Finland had somewhat disappointing results in that researchers were not able to detect participants’ dreams by monitoring brainwaves. Detecting dreams that way certainly seemed like a realistic goal. Sleep and dreaming are neurological processes, and in fact, brainwave patterns are what distinguish the different sleep stages. We’ve known for a long time that most dreaming happens in REM sleep, which is characterized by brain activity similar to wakefulness even though the rest of the body is at its deepest level of sleep. (REM stands for Rapid Eye Movement, which is another thing that differentiates REM sleep from nREM, or non-REM, sleep.) But it’s not a direct correlation. The brain doesn’t necessarily dream throughout REM sleep, and some dreaming does occur in nREM sleep. There’s still more to learn about why it works that way. 

 


Why We Sleep: Unlocking the Power of Sleep and Dreams by Matthew Walker, 2017.

I happen to be in the middle of reading a book about sleep, and its author, Matthew Walker, is one of the researchers who conducted this study on sleep and Alzheimer’s disease. In fact, the chapter that I just read was about observations that are backed up by this study. The basic gist is that changes in sleep patterns appear to be an early indicator of Alzheimer’s disease. This news article is very brief and I can’t access the full text of the journal article itself, but based upon the abstract and the chapter in the book, it would seem that there’s still room for debate about cause and effect. Thanks to relatively recent medical advances, we now know that Alzheimer’s disease is associated with (and probably caused by) the buildup of certain proteins in the brain, but is that buildup caused by poor sleep quality and insufficient amounts of sleep? Or is it those protein buildups that impair sleep? According to Walker (at least as of the book’s 2017 publication date), both are probably true. It’s a vicious cycle in which the symptoms of the disease are also what drives the progression of the disease.

 

In other sleep-related news, another study suggested that leaving the light on while sleeping may increase the risk of obesity in women. (The study evidently did not evaluate whether this applies to men) The article isn’t very specific about possible explanations for this, except for a line about “hormones and other biological processes”. Since it’s fairly well established now that inadequate sleep is linked to weight gain, I’m speculating that sleep quality is the significant factor here. It makes sense that artificial light can have an impact on the sleeping person’s sleep cycle, perhaps by making it take a little longer to get all the way into deep sleep, or by affecting the proportion of REM to nREM sleep.

I’ve come across some other weight-related science stories. While this one doesn’t directly discuss sleep, it continues the theme of brain activity. (If you’re tired of my fascination with the human brain, here’s a fun story about bees’ cognition) Young children’s brains use a very large proportion of their energy intake. At some points in children’s development, their brains are using more than half of their calories, which is pretty amazing even before you stop to think about the fact that those children’s bodies are also growing and developing quickly. The article goes on to say that this energy expenditure could mean that education at the preschool level can stave off obesity. But I think that another important takeaway here is that young children need to be well-nourished. 

Meanwhile, for adults, coffee could help fight obesity because of its effect on BAT (brown adipose tissue), otherwise known as brown fat. The article goes on to describe how brown fat differs from regular white fat and what role it plays in metabolism. As you might guess, caffeine is the reason that coffee has this effect. Although lots of people have long claimed that black coffee combats weight gain, it appears that this study is groundbreaking in demonstrating how this works. (The part about caffeine speeding up metabolism has been common knowledge for a while, but that’s much more vague than the new information about brown fat’s role in the process.) 

Yet again this month, there’s some new information about autism. This study identified a part of the brain in which a lower density of neurons corresponds to certain traits and mannerisms associated with autism, specifically those involving rigid thinking. Granted, this information doesn’t add anything to our limited understanding of what causes autism, but another study indicates that one factor might be propionic acid, a preservative often found in processed foods. This article focuses on the effect of propionic acid on a developing fetus. I can’t tell from the article whether this information is specific to a certain prenatal phase of development or if it also applies to children after birth. 

In other health-related news, progress is being made in diagnosing Lyme disease, and we may even be approaching a cure for the common cold. For the particularly health-conscious, here are some other things to take into consideration in your everyday life. Certain microbes in the gut may have a positive impact on athletic ability, it’s a good idea for everyone to spend two hours a week out in nature, and antioxidants are actually bad for you. Okay, that’s an unfair oversimplification. But the point is that scientists are coming to the conclusion that antioxidants are not the anti-cancer solution we’ve long thought they were; they could actually increase the risk of lung cancer, because they don’t just protect healthy cells from those harmful free radicals we hear so much about. They also protect cancer cells. Of course, that doesn’t mean that you should avoid foods high in antioxidants. But it does mean that it’s not a good idea to take dietary supplements that give you several times the needed amount of certain nutrients. It’s just one more reason that it’s healthier to rely on food for your nutrition. Incidentally, one such antioxidant is Vitamin C, which has been praised as one of the few nutrients that isn’t bad for you if you consume too much. I guess that’s not true after all. 

Rather than ending on that bleak note, I’ll wrap this up by pointing out an impressive technological feat. Researchers from Carnegie Mellon University have developed a brain-controlled robotic arm that’s headline-worthy as the least-invasive BCI yet. BCI stands for brain-computer interface, and it’s basically what it sounds like: the technology actually exists for computers to respond directly to the human brain. I have no idea how it works. Here’s the article, although you should probably ignore the stock image of a computer cursor. If you want a relevant visual, this brief YouTube video includes a few seconds showing the actual robotic arm. Sure, it’s not exactly cyborg technology as depicted in movies, but it’s still pretty incredible. 


Science News from May 2019


This is the second installment of what I hope will become a monthly series. I’ve been periodically checking a few different websites (mostly sciencedaily.com and sciencenews.org) and keeping an eye out for interesting science news stories. Although we’re already halfway through the month of June, this blog post only includes stories through the end of May. But I am in the process of collecting more recent content for next month!

My summary of April’s science news included a lot of studies about food and nutrition, so I’m going to start by following that up with this inconclusive study about whether highly processed foods cause weight gain. The article suggests a couple reasons for the lack of a clear yes/no answer. Nutritional needs and metabolism vary from person to person, and it’s still unclear to scientists what all of the variables are. Besides that, nutrition is hard to study because an accurate scientific study requires a controlled environment and detailed data collection, which means there’s a disconnect from real-life eating habits. This article mentions the possible effects of “social isolation, stress, boredom, and the fact that foods are prepared in a laboratory,” but that barely scratches the surface of the possible confounding variables. There’s also the possibility that participants’ eating habits, amount of exercise, or even their metabolism is affected simply by the knowledge that they’re part of a study on nutrition. Here’s another recent study that didn’t confirm a common nutrition “fact”. It would appear that dietary cholesterol doesn’t really cause strokes. The takeaway here is that nutrition is still a relatively new field of study and there’s a lot more to learn. (On a side note, though, apparently blueberries are good for blood pressure)

Meanwhile, the University of Edinburgh has been asking the big questions and perfecting the chocolate-making process. And in Munich, they’re studying the scent of dark chocolate. They’ve identified 70 different chemicals whose odors combine to create the distinctive smell of dark chocolate, although only 28 to 30 are really detectable. And as long as we’re talking about scents, another study showed that people who drink coffee are more sensitive to the smell of coffee. 

Another topic that played a big role in my blog post from last month was artificial intelligence. I have another update to add in that area, too. In the ongoing quest to make AI as similar to the human brain as possible, researchers have noticed that machines with an artificial neural network (as opposed to a conventional computer model, which relies entirely on algorithms and can only “think” sequentially) can have a human-like number sense.

If you aren’t entirely sure what that means, let’s use the image on the right as an example. How many dots are there? You probably noticed that there are five dots as soon as you scrolled down far enough to see the image, even before you read these words that tell you why it’s here. But you probably didn’t look closely at it and consciously think the words, “One, two, three, four, five.” As quick and easy as it is to count to five, it’s even quicker and easier to just visually recognize the pattern and know that it illustrates the number five. Your brain is capable of doing that without actually counting. You’re also capable of looking at two different images with a cluster of dots and instinctively knowing which one has more without actually counting. (There’s some debate about whether that’s the exact same skill or just a related skill. My opinion is that it’s different, but there’s obviously a connection)

As I’ve tried to look up more information on visual number sense, I’ve increasingly realized that there are other debates on the topic as well. There’s a variety of questions and opinions about how it works, whether it varies from person to person, and whether it’s an inherent, innate skill or an acquired skill. But based upon what we know about how people learn to read, and also based upon what this new AI story demonstrates, I think it’s pretty clear that this is an example of neuron specialization. You literally use different neurons to recognize the five-ness of this image than the neurons you would use to recognize a different number. Think of a child learning how to read: first he or she must learn to recognize each letter of the alphabet as a distinct symbol and understand that each one makes a different sound, but then he or she has to learn to do so very quickly in order to be able to comprehend the meaning of whole words and sentences. To become a proficient reader, the child must eventually learn to recognize whole words instantaneously. This learning process usually takes at least three or four years because it actually requires changes in the brain. Not only does it necessitate close cooperation between the neural networks used for vision and conceptual comprehension, it also requires specific neurons to specialize in identifying specific visual cues, such as the letter A or the word “the”.

I could ramble for a while longer about that (I am a children’s librarian, after all) but I’ll leave it at that because my point is just that it makes sense that number recognition works similarly. But it’s a lot easier. The concept of “five” is much more intuitive than the concept that a particular arrangement of squiggles corresponds to a particular grouping of sounds, which in turn corresponds to a particular thing or idea. I’m not sure that AI would be capable of learning to read; a computer only comprehends text insofar as it’s been programmed to recognize certain letters, words, or commands. If a programmer makes a typo and leaves out a letter or punctuation mark, the computer doesn’t recognize the command. But based upon this new story about AI number sense, a computer with an artificial neural network can indeed use a process akin to neural specialization to develop human-like visual number recognition.
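If you want a feel for that idea in practice, here’s a toy sketch of my own (not the actual model from the study; the network size, image size, and training details are all just assumptions made up for illustration): a tiny neural network, written in Python with NumPy, that learns to report how many dots appear in a small random image without ever counting them one by one.

```python
# A toy sketch of "number sense" in a small neural network (illustrative only,
# not the study's model). It trains a tiny two-layer network with plain NumPy
# to say how many dots (1-5) appear in a random 8x8 image.
import numpy as np

rng = np.random.default_rng(0)
GRID, MAX_DOTS = 8, 5

def make_image(n_dots):
    """Return a flattened GRID x GRID binary image with n_dots dots placed at random."""
    img = np.zeros(GRID * GRID)
    img[rng.choice(GRID * GRID, size=n_dots, replace=False)] = 1.0
    return img

def make_batch(size):
    counts = rng.integers(1, MAX_DOTS + 1, size=size)
    images = np.stack([make_image(c) for c in counts])
    return images, counts - 1                      # labels 0..4 for counts 1..5

# One hidden layer of 16 units, trained by gradient descent on cross-entropy loss.
W1 = rng.normal(0, 0.1, (GRID * GRID, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, MAX_DOTS));    b2 = np.zeros(MAX_DOTS)

def forward(X):
    hidden = np.maximum(0, X @ W1 + b1)            # ReLU hidden layer
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return hidden, exp / exp.sum(axis=1, keepdims=True)   # softmax probabilities

learning_rate = 0.5
for step in range(2000):
    X, y = make_batch(64)
    hidden, probs = forward(X)
    grad_logits = probs.copy()
    grad_logits[np.arange(len(y)), y] -= 1         # d(loss)/d(logits) for cross-entropy
    grad_logits /= len(y)
    grad_hidden = (grad_logits @ W2.T) * (hidden > 0)
    W2 -= learning_rate * hidden.T @ grad_logits; b2 -= learning_rate * grad_logits.sum(0)
    W1 -= learning_rate * X.T @ grad_hidden;      b1 -= learning_rate * grad_hidden.sum(0)

# After training, the network "recognizes" dot counts without sequential counting.
X_test, y_test = make_batch(500)
_, probs_test = forward(X_test)
print("accuracy on new images:", (probs_test.argmax(axis=1) == y_test).mean())
```

The real research uses far bigger networks and a more careful setup, of course, so take this strictly as an illustration of a network learning to map a visual pattern straight to a number.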

That might not seem like a scientific advancement, because after all, the one advantage that computers have over human brains is their ability to work with numbers almost instantaneously, whether that means counting or arithmetic or more advanced mathematics. But it’s certainly an interesting story because it validates the similarity between an artificial neural network and an actual human neural network. Also, it gives me an excuse to nerd out about neural specialization and literacy acquisition, which is the real point here.

But speaking of small children, a new study from Massachusetts General Hospital has found what countless other studies have also shown: Early childhood is a very formative phase of life. It has been common knowledge for a while now that personality traits, social skills, intelligence, and even academic potential are mostly determined by the age of five. This particular study was looking at the impact of adversity such as abuse or poverty, and it evaluated this impact by looking at the biochemistry of epigenetics rather than behavior or self-reported psychological traits. (Epigenetics describes things that are caused by changes in gene expression rather than differences in the genes themselves. In other words, genetics determine what traits or disorders a person is predisposed to have, and epigenetics determine whether that person actually develops those traits or disorders.) The data came from a longitudinal (long-term) study that has been collecting both DNA methylation profiles and reports from parents about a variety of factors related to health and life experiences. Predictably, researchers found the greatest correlation between life experiences and DNA methylation changes when the child was under the age of three. 

Other interesting stories about neurology and psychology include one study about the brain processes involved in decision-making, another study that identifies the part of the brain responsible for how we process pain and use it to learn to avoid pain, a study showing that (at least among ravens) bad moods really are more contagious than good moods, and finally, some new information that may help explain the cause of autism. (Spoiler: it’s certain genetic mutations) I’m just sharing the links rather than the full stories here in the interest of time, but there’s some fascinating stuff there.

Here’s another story about genetics, although this one is really just a fun fact. It would appear that your genes influence your likelihood of having a dog. Apparently, this study didn’t look at other types of pets, but I’d be interested to know if this means that pet preference is genetic in general. The study, or at least this article about it, seemed to be more interested in the anthropological aspect of dog ownership, because it talks more about the history of the domestication of dogs than about the relationship between humans and animals in general. Another question that I have is how the researchers accounted for the possibility that it’s the early childhood environment, and not genetics, that determines pet preference. I am sure that my love for cats was initially due to the fact that I was practically adopted at birth by a cat, and he was a very significant (and positive) aspect of my early childhood experience. Although this is just anecdotal evidence, I have noticed that many cat lovers grew up in households with cats and many dog lovers grew up in households with dogs. But I digress. 

I seem to have already established the pattern of focusing on nutrition and neurobiology over all other kinds of science, but I do have a couple other stories to mention. For one thing, artificial intelligence isn’t the only way in which technology is learning to replicate nature. Now we’ve got artificial photosynthesis, too. We’ve also got some new planets. Eighteen of them, to be exact! But don’t worry; I don’t think anyone is expecting us to memorize their names. They’re not in our solar system. And here’s one final bit of science news: As of May 20, the word “kilogram” has an updated definition. The newly defined kilogram is almost precisely equal to the old kilogram and this change will not have an effect on people’s everyday lives, but the metric system’s measurement of mass is now based upon a fixed physical constant (Planck’s constant, to be specific) rather than on an arbitrary object (a metal cylinder called Le Grand K, which is kept in a vault in France). 
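For anyone curious, here’s roughly how the new definition hangs together (a sketch of the reasoning, not a quotation from the official SI documents): Planck’s constant is now assigned an exact numerical value, and since the second and the metre are already defined by the caesium clock frequency and the speed of light, the kilogram follows from the units that Planck’s constant is measured in.

$$h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^2\,s^{-1}} \quad\Longrightarrow\quad 1\ \mathrm{kg} = \frac{h}{6.626\,070\,15 \times 10^{-34}\ \mathrm{m^2\,s^{-1}}}$$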

So that’ll be it for now. Coming up next time (depending upon what I may find between now and then that’s even better) are some stories about the Mona Lisa, pentaquarks, and developments in weather forecasting.

 

Science News from April 2019


This is something I’ve been thinking about doing for a while now. The idea is that I’ll keep an eye out for recent studies or other science news and then post a monthly summary of the most interesting stories. Usually, the goal will be to post these at the beginning of the month because they’ll be about news from the previous month. But it took me a while to get around to doing this first one. I’m sticking to April 2019 news even though we’re already more than halfway through May.

I’ll be honest here; my real reason for starting this new blog post series is to look up random stuff online and call it “research” instead of “wasting time reading random articles”. However, it would also be great if someone else out there stumbles across my blog and learns something that they might not have otherwise seen. I should probably include a bit of a disclaimer acknowledging that I don’t have a background in science beyond a couple general education classes in undergrad. Also, I’m only including things that I personally find interesting, which means that the content will be skewed towards some topics more than others. For example, there’s going to be a lot more about neuropsychology than about ecology. This particular month, it’s a lot about food and brains. I encourage reader interaction; if you’ve come across some interesting science news that I haven’t included here, please leave a comment!

The biggest science news of April 2019 is a picture of a fuzzy orange circle against a black background. It’s been around for more than a month now, so the hype has faded a little, but you probably were seeing it all over the internet a few weeks ago, usually accompanied by the name and headshot of Katie Bouman, the 29-year-old computer scientist who played a key role in taking this picture. (In fact, this image is the result of many people’s efforts over the course of many years) But as much hype as that fuzzy orange circle is getting, it’s all well-deserved, because it’s a real photograph of a mysterious and fascinating phenomenon that we know and love from science fiction. We Earth people now have seen a black hole. 

A black hole, which is typically formed by the gravitational collapse of a very massive star, is an area of space with such a strong gravitational force that even light cannot escape from it. It’s actually the opposite of a hole; rather than being empty space, it’s a region of extremely condensed mass. The existence of such phenomena was suggested as early as 1783 by John Michell, a British academic whose writings and professions cover an impressive array of disciplines. (His various roles and titles at Cambridge alone include arithmetic, theology, geometry, Greek, Hebrew, philosophy, and geology; he was also a rector in the Anglican church and a relatively prolific writer of scientific essays) The idea of black holes didn’t get much attention until Albert Einstein’s general theory of relativity came along in 1915, describing how matter can bend spacetime and suggesting that such a thing as a black hole could exist. However, even then, the term “black hole” wasn’t in common usage until around 1964, and black holes basically belonged in the realm of theoretical physics and science fiction at that point and for a while afterwards.

If you look up timelines showing the advances in our knowledge of black holes, there are plenty of landmarks over the course of the last four or five decades, and some of these developments have resulted in images that show the effects of a black hole in some way, shape, or form. But the picture produced by the Event Horizon Telescope on April 10 of this year is the first actual photograph to directly show a black hole. The Event Horizon Telescope is actually several (currently eight) radio telescopes around the world whose observations are synchronized with atomic clocks and then combined by computers, a technique known as very-long-baseline interferometry.

In other astronomy news, planetary scientists say that we’ll have a once-in-a-millennium encounter with an asteroid in about ten years, and it’ll happen on Friday the 13th. (April 13, 2029, to be exact) We’re told there’s no cause for concern; it won’t hit the Earth. This asteroid is just a fun fact, so let’s move on to the extremely important topic of food.

People have been noticing for a while that the “Mediterranean diet” seems to be healthier than the “Western diet”. Although there are some organizations out there that have put forth very specific definitions of what constitutes the “Mediterranean diet,” the basic gist is that American food includes more animal-based fats, whereas the cuisine in places like Greece and southern Italy has more plant-based fats, especially olive oil. Proponents of the Mediterranean diet often point out the significance of cultural factors beyond nutrition. Our eating habits in America tend to prioritize convenience over socialization, while the idyllic Mediterranean meal is home-cooked, shared with family and friends, eaten at a leisurely pace, and most likely enjoyed with a glass or two of wine. I mention this because it has been suggested that a person’s enjoyment of their food actually impacts the way in which their body processes the nutrients. In other words, food is healthier when you enjoy it.

In this particular study, that factor wasn’t taken into consideration and probably didn’t play a role. (The socialization aspect and the wine consumption weren’t mentioned) But researchers did establish that monkeys who were fed a Mediterranean diet consumed fewer calories and maintained a lower body weight than monkeys who were fed an American diet, despite the fact that both groups were allowed to eat whatever amount they wanted. The implication is that the Mediterranean diet, and perhaps plant-based fat specifically, is the key to avoiding overeating. 

On another nutrition-related topic, it turns out that protein shakes aren’t as great as many people think. While it’s true and well-established that a high-protein, low-carb diet is effective for building muscle mass, there are drawbacks. Of course, general common knowledge has long dictated that a varied and balanced diet is good, but it turns out that too much reliance on supplements can actually be dangerous in the long run. Essentially, protein supplements can negatively impact mood, lead to a tendency to overeat, and eventually cause weight gain and decrease lifespan. Even if you’re a bodybuilder, you’re better off getting your protein from regular food than from protein drinks and protein bars. 

Let’s move on from foods to beverages. Scientists have suggested that taste preferences could be genetic, and some kind-of-recent studies have backed up that theory. But this recent study from Northwestern University, which focused on a few specific beverages classified by taste category, didn’t reveal many genetic correlations. Instead, it appears that drink preferences are based more on psychological factors. In other words, I don’t love coffee because I love the taste of bitterness; I love coffee because I love the caffeine boost. Another study suggests that I don’t even need to taste my coffee to benefit from it. The psychological association between coffee and alertness means that our minds can be “primed” (that is, influenced) by coffee-related cues, like seeing a picture of coffee. In this study from the University of Toronto, participants’ time perception and clarity of thought were affected just by being reminded of coffee. (You’re welcome for the coffee picture I’ve included here)

I’ve come across a couple other interesting brain-related articles. For one thing, there have been recent developments in the understanding of dementia. We’ve known for a while that Alzheimer’s disease is correlated with (and probably caused by) the accumulation of certain proteins in the brain, but researchers have now identified a different type of dementia caused by the buildup of different proteins. In the short term, this isn’t a life-changing scientific development; this newly acknowledged disorder (called LATE, which stands for Limbic-Predominant Age-related TDP-43 Encephalopathy) has the same symptoms as Alzheimer’s and doctors can only tell the difference after the patient’s death. But in the long run, this new information may help doctors forestall and/or treat dementia.

Meanwhile, researchers are working on developing the technology to translate brain activity into audible speech. The idea is that this will give non-verbal people a way to speak that is a lot more user-friendly and authentic than what exists now.

In other neurological news, the long-standing debate about neurogenesis seems to have found its answer. The question is whether our brains continue to make new brain cells throughout our lives. Some neurologists argue that, once we’re adults, we’re stuck with what we’ve got. In the past, researchers have looked at post-mortem brains and seen little or no evidence to indicate that any of the brain cells were new. But in this study, the researchers made sure that their brain specimens were all from people who had only recently died. This time, there were lots of brain cells that appeared to be brand new. The brains without these new cells came from people who had had Alzheimer’s; evidently, dementia and neurogenesis don’t go together. (The question is whether dementia stops neurogenesis or whether dementia is caused by a lack of neurogenesis. Or perhaps neither directly causes the other and there are factors yet to be discovered.)

In somewhat less groundbreaking neurology news, one study from the University of Colorado has shown yet again that extra sleep on the weekend doesn’t make up for sleep deprivation during the week. (This article makes it sound like a new discovery, but medical science has been saying this for a while.) 

All of this neuroscience stuff makes me think of a picture I saw that was originally posted on Twitter a few weeks ago. Apparently, it attracted a good deal of attention because a lot of people found it creepy. The picture looks like a pile of random stuff, but none of the individual objects are recognizable. Psychologically, that’s just weird. It brings to mind the intriguing psychological phenomenon known as the uncanny valley.

The uncanny valley refers to the creepy feeling that people get from something non-human that seems very humanlike. For example, robots with realistic faces and voices are very unsettling. If you don’t know what I mean, look up Erica from the Intelligent Robotics Laboratory at Osaka University. Or just Google “uncanny valley” and you’ll easily find plenty of examples. Although the concept of the uncanny valley generally refers to humanoid robots, the same effect shows up elsewhere, like with realistic dolls or shadows that seem human-shaped. It’s why most people actually find clowns more creepy than funny, and it’s one of several reasons that it’s disturbing to see a dead body. The term “uncanny valley” refers to the shape of a graph that estimates the relationship between something’s human likeness and the degree to which it feels unsettling. Up to a certain point, things like robots or dolls are more appealing the more human-like they are, but then there’s a steep “valley” in the graph where the thing in question is very human-like and very unappealing. This tweeted picture of non-things isn’t quite the same phenomenon because it doesn’t involve human likeness. But there’s still something intrinsically unsettling about an image that looks more realistic at a glance than it does when you look more closely.

So what’s the deal with this picture? It was probably created by an AI (artificial intelligence) program designed to compile a composite image based on images from all over the internet. Essentially, the program understands what a “pile of random stuff” should look like, but doesn’t know quite enough to recreate accurate images of specific items. This is an interesting demonstration of the current capabilities and limitations of AI technology. At their core, AI programs mimic a real brain’s ability to learn. These programs can glean information from outside sources (like Google images) or from interactions with people. They then use this information to do things like create composite images, design simulations, and mimic human conversation, whether text-based or audio.

AI programs have only relatively recently come into common use, but this technology now plays a large role in our everyday lives. Obviously, there are voice assistants like Siri and Alexa that can carry on something resembling a human conversation, and self-driving cars are now becoming a reality. Technically, things as mundane as recommendations on Netflix or Amazon are examples of AI, and AI programs are used for simulations and analysis in areas such as marketing, finance, and even health care. Recently, medical researchers have found that AI is creepily accurate at predicting death. And science is continually coming up with ways to advance AI technology. (I admittedly don’t understand this article explaining that magnets can make AI more human-brain-like, but it’s interesting anyway) 

In the interest of time, I’m going to end this blog post here. It’s gotten very long even though I’ve cut out probably about half of the stories I’d intended to include. If all goes as planned, I’ll have another one of these to post in a couple weeks.

Are Christians Hypocrites?


On a regular basis, somebody who’s famous and Christian does something scandalous that leads people to question their values. The obvious current example is the story of Josh Duggar, which came to light this past week and is probably the biggest story to hit the news since the events in Baltimore several weeks ago. On the one hand, it’s sad that the media plays up these stories, often doing so in a cynical way and casting a bad light on Christians in general. On the other hand, though, you can’t really blame them; it really is a big story when someone who’s famous for their flawless morality does something shockingly immoral. Besides, there are plenty of people out there in the general public who are glad to hear things that allow them to accuse Christians of hypocrisy. Sex scandals are just the big news stories; Christians do other unchristian things, too. Look at the comments on pretty much any online article dealing with religion, and you’ll see a number of complaints that basically boil down to the accusation that Christians are unkind, unloving, or unforgiving, when kindness, love, and forgiveness are supposed to be the whole point of Christianity. I sometimes think that the reason people are quick to point out high-profile hypocrisy is because non-Christians are so annoyed by the self-righteous attitude that they perceive Christians as having in less high-profile scenarios. This, sad to say, is evidently what most non-Christians in our culture see when they look at Christians.

But are they right? Are Christians, across the board, hypocritical? In the wake of the latest big-news sex scandal, this has been the topic of a lot of internet discussion. Some Christians have tried very hard to insist that the answer is no, or even to defend the actions of Josh Duggar. That’s just silly; even he himself has come forward and said that what he did was “inexcusable”. A lot of the discussion on social media points out that, for all the attention being given to the sinner in this case, not much is being said about the girls who were affected. I’m going to give the media the benefit of the doubt and assume that this hole in the coverage is protecting the privacy of these girls, but these people on the internet are right to point out that an apology doesn’t undo wrongdoing. Now, I have nothing against the Duggar family; I would even go so far as to say that their show is the closest thing there is to wholesome reality TV. But there’s no denying the fact that Josh Duggar committed a sin that harmed people. Nor is he the only Christian to do so.

The term “Christian values” is often used to refer to a set of values that varies slightly depending upon who’s speaking, but probably includes rules such as modest clothes, no sex outside of marriage, no getting drunk, little or no swearing, and (maybe) conservative political ideals. But Christian morals, as set forth in the Bible, are more specific than that. Jesus says that even something as minor as an insult is a sin punishable by damnation. (Matthew 5:22 ESV: “But I say to you that everyone who is angry with his brother will be liable to judgment; whoever insults his brother will be liable to the council; and whoever says, ‘You fool!’ will be liable to the hell of fire.”) Verses such as Matthew 5:48 and Deuteronomy 18:13 demand perfection. And, as other verses like Romans 3:23 (“For all have sinned and fall short of the glory of God”) and Isaiah 53:6 (“All we like sheep have gone astray”) remind us, no one is perfect.

Therefore, yes, Christianity is a religion full of hypocrites. The only way to avoid that fact is to inaccurately redefine sin in order to incorrectly deny that we are sinners. (Something that some people and even some entire denominations do seem to do, but that’s beside the point) Otherwise, it is inevitably true that Christians do not live up to the high moral standards that Christianity says is necessary. It’s true of Josh Duggar, it’s true of politicians who get involved in scandals, it’s true of people who shoplift or commit any kind of violent act, it’s true of anyone who’s ever told a lie or said something mean, and it’s true of everyone who’s ever driven above the speed limit. And no Christian who has any kind of understanding of sin and the Law can deny it.

But Christianity isn’t just a list of impossible moral rules, or a harsh statement against people who break those rules. The sinfulness and hypocrisy of Christian people isn’t the end of the story. Yes, sometimes Christians are guilty of making it sound as if that’s the whole point, but it isn’t. Jesus did more than preach sermons about what it means to be a good person. Jesus paid for all of our sins through his death on the cross. That’s the actual point of Christianity. It’s what ought to come to people’s minds when they hear the word “Christian”, rather than a vague concept of “Christian values” or a cynical criticism of the lack thereof. I think that both Christians and non-Christians have a tendency to forget that Christianity is fundamentally about Christ.

In short, it’s true that Christians are hypocrites in the strictest sense of the word. It’s true that we’re all sinners by the definition of our own religion. Most of us have not committed the kinds of sins that make headlines, but none of us live the kind of flawless, wholesome, godly lives that Christians are supposed to live. We’re sinners, but we’re forgiven sinners.

Just another blog post about Kate and William’s baby


Prince George Alexander Louis is now two days old, and the internet and media have spent those two days being completely fascinated by him. Look at any news website, and chances are, you’ll find several articles, each repeating the others to a large extent, reporting and commenting on every word that the royal family has said about the baby, declaring how proud Princess Diana would be if she were alive, describing what the baby looks like, and letting us know exactly what Kate was wearing the last time she was seen. (Last I heard, it was a blue polka dot dress reminiscent of what Princess Diana wore shortly after the birth of Prince William.) This baby is currently one of the most popular topics on facebook and twitter and tumblr; every fan of the British royal family wants to say something, even if it’s only a generic congratulatory remark (which the royal family will never personally see) or a comment about how exciting this historical event is.

But not everyone is excited. I’ve been surprised at how many facebook statuses (and even a few internet articles) I’ve seen complaining about the hullabaloo and accusing other people of being obsessive and trivial. They’re annoyed to see coverage of the same story everywhere they look, especially when it’s something that will have no impact on their own lives or the lives of their family and friends. Some of them even take the time to post their own opinions online, complaining about the degree of everyone else’s interest.

It was the same way when the new prince’s parents got married a little over two years ago. The media was obsessed, the general public was fascinated, and an enormous number of people tuned in to watch the royal wedding live on TV, even though, for those of us in North America, it was in the middle of the night. A few of the pictures from the wedding became iconic images in pop culture and the news for the following several months, and even now, most people can remember off the top of their heads exactly what the new Duchess of Cambridge’s wedding dress looked like. And yet there were other people who were tired of hearing about the royal wedding even before it happened, who were either annoyed or amused at anyone who was particularly enthralled with the story, and who wished that the media would let the event pass out of the magazines and newspapers as soon as it was over.

I’m not among those people who pulled an all-nighter to watch the royal wedding live, and I’m not one of those people who has been fanatically keeping track of every detail of Kate’s pregnancy. In fact, I hadn’t remembered offhand even approximately when the royal baby was expected to be born. But I enjoyed seeing after-the-fact online news about the royal wedding, and I’ve enjoyed keeping an eye on the news regarding the new prince over the last couple days, and I personally think it’s wonderful that the media is so excited about the life events of Kate and William and their new son.

Most of what we see in the news is about war, crime, death, political controversies, economic problems, devastating natural disasters, and other tragedies and problems. In general, those kinds of things are bigger news than births and marriages and personal accomplishments of individual people. It’s refreshing and reassuring to see that sometimes, it is possible for good news to be major news. It’s important that, in between being sad about the problems of this world and being upset about the controversies in this world, we can also be happy about the highlights of the lives of famous people. Sharing enthusiasm for these kinds of things draws people together in the same way that political campaigns pull people apart. The fact that Kate and William are likable public figures (and that they make a really cute couple) only adds to the appeal of the news stories that involve them.


Also, he’s really, really cute. I mean, seriously, why wouldn’t you want to see this all over the internet?

It’s pretty clear that Prince George is going to permanently remain in the public eye and on our facebook news feeds. Once the excitement of his birth passes, he won’t be as important to the media as he is now, but we’re still going to be hearing about his first steps and his first words and every childhood landmark that he passes. And although I’m probably not going to be actively seeking out that information, I’ll be interested to see it when other people choose to talk about it on the internet.

Eleven Years Later


Eleven years ago today, it was a beautiful sunny day very much like today. My sister and I had been spending the afternoon playing with dolls in the basement until my father came home unexpectedly early and informed us that there had been a terrorist attack that morning, and that people had died in New York City, at the Pentagon, and in Pennsylvania. We hadn’t had the television or radio on that day and hadn’t heard about the attacks until then. But for the rest of the afternoon, until the prayer service at church that evening, we watched the news coverage on television, even though they were just showing the same couple video clips over and over and over again. Somewhere in our house, I think we still even have the September 12, 2001 issue of the Omaha World-Herald, although it quickly became tattered from being read so frequently.

I learned a lot that day and on subsequent days. I hadn’t even known the definition of certain words like ‘hijack’, and although I had heard the word ‘terrorist’ before, I hadn’t remembered what it meant. I had never heard of al-Qaeda and knew nothing about Afghanistan besides its location. I hadn’t known anything about the World Trade Center, and the word ‘pentagon’ meant nothing to me besides the name of a shape that had either five or six sides; at that time, I couldn’t remember which it was. Even though I had been fascinated by politics since the time of the 2000 presidential election, I had never paid much attention to anything involving foreign policy, and to me, the most significant thing about the government had been the way elections worked. It was something new to read and see news stories about political people doing political things that involved issues more serious than whose name and face we needed to add to our poster of all the American presidents.

The events of September 11 didn’t directly affect me personally. Although I was very frightened and disturbed at the time, and even had a phobia of airplanes for a little while, although I paid a lot more attention to current events from then on, and although it did change the way I thought about politics and patriotism, that was really the extent of September 11’s impact on my life. I didn’t know anyone who died that day. My memories of that day are an insignificant anecdote, which I remember only because I (and people in general) tend to remember major events in terms of minor personal details. I’m sure that my family will always tell the story of how my little sister responded to the news by asking if any windows had been broken. But for many, many people, September 11, 2001 is not an anecdotal memory; it was not an ordinary day where something big happened far away. It was the day when loved ones died, when they directly witnessed a catastrophe, when their world changed in ways that went beyond the political and social ramifications of the attacks.

Those are the reasons that we commemorate September 11. Today is not a political observance, nor is it just a day to remember what that day eleven years ago was like for us personally. The people who died on September 11, 2001 are still dead now, and their families and friends who we prayed for then are still grieving the loss of their loved ones now. They are the reason that we observe the anniversary of those attacks eleven years ago.