Science News from August 2019

Just in case you’re wondering, I do have some legitimate excuses for taking so long to write and post this. It’s been a pretty crazy few weeks, thanks to a sixteen-day-long sinus infection (it seemed like a lot longer than that), the demise of my car, and just general life/work things. But I had this blog post almost ready to go two weeks ago, so it’d be silly to just scrap it. So at this point, I’m just posting what I’ve got; I apologize if it’s a little disorganized and inadequately edited.

Klingon Bird of Prey

I’d like to start by pointing out that we are slowly getting closer to living in a universe where Star Trek technology is a reality. Unfortunately, this isn’t about transporter beams. (Although I’ll come back to that in a few paragraphs) It’s about cloaking devices, which aren’t even Earth technology; hence my choice to use a picture of a Klingon space vessel. Although I’m most familiar with the original series, the internet informs me that the use of cloaking technology would violate the Treaty of Algeron, signed by the Federation and the Romulans in the year 2311. However, our 21st-century Earth physicists have been indicating for a few years now that invisibility is plausible. This article, which is more than a year old, describes some of the more promising developments towards the goal of manipulating light waves to make things invisible. The article references Harry Potter instead of Star Trek, though, which is just silly because the invisibility cloak in Harry Potter is clearly magic.

This new article also mentions Harry Potter, although the technology that it describes is even less similar to a magical invisibility cloak. This time, instead of blocking light waves, the researchers are working on blocking water waves. It’s a little less exciting, maybe, but it has practical applications. Obviously, it’s useful for anything that has to do with boats. The article quotes one physicist as joking that their work will make it easier to have coffee on a boat, but I’m not sure that’s so much a “joke” as it is a worthy pursuit. What makes this development especially interesting, though, is that it essentially works the same way that a light-wave-blocking cloaking device would. 

So what about the transporter beams I mentioned earlier? Well, as you probably realize, real Star-Trek-esque teleportation technology doesn’t exist and isn’t likely to reach us anytime soon. I only mention this development in the context of transporter beams because of this somewhat clickbaitish headline and the Star Trek references in the article. In actuality, this story is about communication and computer science. (In other words, it has more relevance to Uhura than to Scotty)

Melchizedek

Until now, quantum computers have used qubits, which are essentially the quantum counterpart of the bits in a regular computer. A bit is the smallest unit of computer information storage; it represents a single 0 or 1 in binary code. 8 bits make one byte, which is the amount of computer storage needed for a single letter (or other character), and most computer files take up at least several hundred kilobytes (KB), each of which is actually 1,024 bytes. This random (but adorable) picture of my cat Melchizedek takes up over 102 KB on my computer; it’s 105,404 bytes, which is 843,232 bits. The point of all this is that a bit or qubit is a very small thing. Again, it stores one digit of binary code. The news story here is that now there’s such a thing as a qutrit, which stores a digit of ternary code instead of binary. While binary code has two possible values for each digit, ternary code has three possible values, and therefore a qutrit holds a little more information than a qubit or a bit.
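In case it helps to see that arithmetic spelled out, here’s a quick illustrative Python sketch. (The file size is just the cat-photo example from above; the digit-count formulas are the standard ones, not anything from the article.)

```python
import math

# 8 bits per byte, 1,024 bytes per kilobyte
size_bytes = 105_404                   # the cat photo's size on disk
print(size_bytes * 8)                  # 843232 bits
print(round(size_bytes / 1024, 1))     # 102.9 KB

# A bit (or qubit) holds one binary digit: 2 possible values.
# A qutrit holds one ternary digit: 3 possible values.
# Information per digit, measured in bits:
print(math.log2(2))                    # 1.0 bit per binary digit
print(round(math.log2(3), 3))          # ~1.585 bits per ternary digit

# So the same number needs fewer ternary digits than binary digits:
n = 843_232
binary_digits = math.floor(math.log2(n)) + 1     # 20
ternary_digits = math.floor(math.log(n, 3)) + 1  # 13
print(binary_digits, ternary_digits)
```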

I’ve just about run out of reasonable excuses for Star Trek references, but I do have a couple other things to share that have extraterrestrial subject matter. This first one stays pretty close to home. Astronomer David Kipping has suggested that we could make a giant telescope by essentially using the earth’s atmosphere as the lens. We’d just need a spaceship in the right place and equipped with the right devices; the refraction of the light is a natural phenomenon. You can see Kipping’s paper here.

Meanwhile, five of Jupiter’s recently discovered moons have been named. The names were chosen via an online contest, but the options were somewhat limited, since it has already been established that all of Jupiter’s moons must be named after figures from Greek or Roman mythology who were either lovers of Zeus/Jupiter or descended from him. (While there are some differences between the Greek and Roman myths besides the gods’ names, it’s still fair to consider Zeus and Jupiter to be essentially equivalent from an astronomical standpoint) These five moons each get their name from a Greek goddess who is either a daughter or granddaughter of Zeus. Their names are Pandia, Ersa, Eirene, Philophrosyne, and Eupheme. For a complete (and apparently up-to-date) list of moon names, click here. Just to be clear, the picture I’ve used here is Jupiter itself, not a moon.

Moving on to a different scientific field, I’m happy to report that science has confirmed that chocolate is good for you, especially dark chocolate. This cross-sectional study of over 13,000 participants indicates that eating some dark chocolate every other day could reduce the likelihood of having depressive symptoms by 70 percent. (Google Docs put a squiggly line under the phrase “every other day” and suggests that I meant “every day”. Maybe Google knows something that the rest of us don’t.) In other nutritional news, the flavonoids found in apples and tea evidently lower the risk of cancer and heart disease. And this pilot study has suggested that pregnant women would do well to drink pomegranate juice or consume other foods high in polyphenols (a category of antioxidants) in order to protect their unborn children from brain problems caused by IUGR (intrauterine growth restriction), a very common health issue which often involves inadequate oxygen flow to the developing brain. A larger clinical trial is already underway.

A recent study indicates that there could be a genetic link between language development problems and childhood mental health problems. Educators have noted in the past that kids with developmental language delays are often the same kids with emotional or behavioral problems, but as the article says, it’s always been generally assumed that the reason for this is causal; the idea is that the frustration or confusion that comes from language difficulties causes (or exacerbates) emotional issues. But this study used statistical analysis of participants’ genetic data to evaluate the correlation, and researchers think that both types of problems could be caused by the same genes.

Interestingly enough, another potential cause of childhood mental health problems is strep throat. This article isn’t really reporting new information; clinicians have been studying PANDAS (pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections) for a couple decades, and there is still debate over what is actually going on in the patients’ brains, whether PANDAS should be considered a form of OCD or a separate disorder, and whether PANDAS is even a real thing. The article linked above describes one specific case and outlines some of the questions and research.

Here’s another one about child development. A researcher in Australia developed a preschool program based entirely upon music and movement. The idea is that these kinds of activities are such effective tools in early childhood education that they can help to close the achievement gap between children from different socioeconomic situations. This is one of those cases where common knowledge is a little ahead of official scientific research. As a children’s librarian, I can verify that virtually anyone who works with kids knows that singing and dancing helps kids learn. There are several reasons for this, ranging from “it activates numerous brain regions at the same time, thereby forming connections that aid both memory and comprehension” to “it burns off excess energy and keeps the kids out of trouble”. For these and other reasons, most preschool programs (and, of course, library storytimes) involve a lot of songs, most of which have their own dances or hand movements. Besides that, school-age kids who take classes in music and/or dance tend to be more academically successful than those who don’t. As far as I know, nobody has ever done the research to officially confirm that, but lots of people in education and the arts are aware of it. (If there’s anyone reading this who has the academic background and the means to do such a study, I’d like to request that you pursue that. Please and thank you.)

Reading

The next two studies I want to mention are so closely related in their subject matter and findings that I initially thought the two articles were about the same study. But they aren’t. Although they both involved using MRI technology to watch what someone’s brain does while they read, this one from scientists in England was looking at how the brain translates markings on paper into meaningful words, while this study from the University of California, Berkeley compared the brain activity in subjects who read and listened to specific stories. The most conclusive finding from that particular study was that, from a neurological perspective, listening to audiobooks or podcasts is essentially equivalent to reading with your eyes, which almost seems to contradict the study from England, which showed that visual processing is the first neurological step in reading. That’s not really a contradiction, though, when you stop to think about just how busy the brain is when reading.

Neurologists have always known that reading is a complex cognitive process that doesn’t have its own designated brain region; it requires cooperation between several different cognitive processes, including those that process visual input, those that relate to spoken language, and higher-order thinking. (That is, our conscious thoughts) But these recent studies have given neurologists a clearer map of written language’s path through the brain. Both of these studies also indicate that the exact brain activity varies slightly depending upon what the participant is reading. In fact, in the study from the University of California, researchers could predict what words the person was reading based upon brain activity. It would seem that our brains associate words with each other based both upon how that word is pronounced and what it means. These studies have some very useful potential applications, such as helping people with dyslexia or auditory processing disorders. In the meantime, they provide us with a couple new fun facts. As far as I’m concerned, fun facts are pretty important, too.

Science News from July 2019

This one is going to be a little long, guys; I’ve been sick and home alone for the past two days. I spent most of that time asleep, but this blog post is my feeble attempt to feel like those two days weren’t totally wasted. 

Once again, most of my favorite science news stories from this past month have been about small children and/or the brain. (I’ve had to cut the list down to just a few things because I couldn’t help rambling about some of them) So I’m going to start with the couple stories that don’t fit those themes. These first ones are both about physics. First, here’s some information about the enigma that is dark matter. Nope, physicists haven’t figured it out yet. We still don’t know what exactly dark matter is, except that it has gravitational effects on regular matter. But these scientists say that, based upon the most plausible explanations of dark matter that we have to go on, it’s not something that’s going to kill people. Click here to read their paper, which is much more technical than the article linked above.

Quantum Computer

IBM quantum computer

Meanwhile, here’s the latest news in quantum mechanics and its application in computer science.  Also, you might be interested in looking at this other article from a few weeks earlier, which gives more information about what a “quantum computer” is. I honestly find it pretty confusing. Quantum mechanics seems like a very abstract, theoretical thing to me, so I can’t quite wrap my mind around the fact that it has applications in something as real and concrete as computer hardware. 

Moving on to neuroscience, let’s talk about sensory processing, specifically our sense of hearing. This study indicates that humans respond differently to music than monkeys do. Essentially, humans are more neurologically inclined to listen specifically to musical tone, while monkeys perceive music as just another noise. Interestingly enough, this research project was inspired by similar research on visual perception that produced very different results. Monkeys evidently do interpret visual input in the same way that humans do, and in fact, their reaction to non-musical sounds is also equivalent to that of a human. So, to put it simply, music appreciation itself is perhaps specifically a human trait. If this experiment is replicated with other types of animals, I’d be interested in those results. 

Meanwhile, a completely unrelated study has demonstrated that there’s a difference between listening to someone’s voice and listening to the words they’re saying. And no, the difference isn’t a matter of paying attention. In this experiment, participants’ brains literally processed vocal sounds differently depending upon whether the subject had been asked to distinguish different voices or to identify the phonemes. (A phoneme is an individual sound, such as /p/ or /t/) Rather than using actual words, researchers used meaningless combinations of phonemes, like “preperibion” and “gabratrade”, presumably to isolate the auditory neurological processes from the language comprehension neurological processes. This experiment was all about the sounds, not semantics. Still, I would imagine that the most relevant takeaway from this study is related to language. In order to function in society, we need to instinctively recognize a familiar word or phrase by its phonemes regardless of who is saying it. Otherwise, we would have to relearn vocabulary every time we meet a new person.

Language

This leads me to some other interesting information about language and linguistics. The Max Planck Institute for Psycholinguistics suggests that the complexity of the grammar in any particular language is determined by the population size of the group in which the language developed. (That is, languages with more speakers have simpler grammatical rules) This isn’t a new theory; in fact, it’s fairly well established that there’s a correlation between a language’s grammatical simplicity and its current population of speakers. But for the social experiment described in this article, the question is how and why this is the case.

Participants in this study had to make up their own languages over the course of several hours. They played a guessing game in which people paired up and took turns using made-up nonsense words to describe a moving shape on a computer screen while their partner tried to figure out what the shape was and which direction it was moving. They would switch partners occasionally, resulting in groups of different sizes even though the communication was only between two people at a time. Initially, both the descriptions and the guesses were completely random, but eventually, the participants would develop some consistency of vocabulary and grammar within their group.

The results were as expected. Larger groups developed consistent, simple, and systematic grammatical rules because communication was impossible otherwise. Some of the smaller groups didn’t even get as far as developing grammatical rules at all, just some vocabulary, and those that did use some form of grammar didn’t necessarily have very logical or clearly defined rules. You can make do without consistent grammatical rules when you’re only trying to coordinate communication between a few people. Additionally, the smaller groups produced languages that were entirely unique, while the larger groups tended to develop languages that resembled those of other groups.

Of course, it’s questionable whether an experiment like this is an accurate representation of real-world languages. Languages develop over many generations, not just a few hours, and even uncommon languages have thousands of speakers. (Yes, there are some languages that only have a few living native speakers left, but in those cases, the language used to have many more speakers and has been gradually dying out. The number relevant to this research isn’t the current number of native speakers; it’s the population size in which the language originated.) It’s an interesting experiment, though, and it’s at least plausible that it did an accurate job of replicating the social processes by which a language’s grammar is formed.

Letter blocks

This research from Princeton University looks at a different subfield of psycholinguistics: language acquisition in early childhood. Researchers were curious about how a toddler figures out whether a new vocabulary word has a broad or specific meaning. For example, if you show a child a picture of a blowfish and teach him or her the word “blowfish,” will the child mistakenly apply that word to other types of fish, or will the child understand that the word “blowfish” only refers to the very specific kind of fish in the picture? (The article uses this example to coin the term “blowfish effect”) Toddlers are surprisingly good at this. Sure, they get it wrong every now and then, but think about it. How often have you heard a child say “bread” when they’re talking about some other type of food, or “ball” when they’re talking about some other type of toy? Those kinds of mistakes don’t happen nearly as often as they would if this type of learning were entirely trial and error.

Apparently, the prevalent theory has previously been that children always learn general words before learning specific words, presumably because the general words tend to be more commonly spoken and shorter. (I deliberately used only one-syllable examples at the end of the previous paragraph, but it is true that one-syllable nouns tend to have more general meanings than multi-syllable nouns) However, as anyone who spends much time around children can tell you, some children do learn some very specific words, most often nouns, at a very young age. I’ve seen children as young as three accurately identify specific objects, as represented by toys or pictures, such as different types of construction vehicles or different kinds of dinosaurs. In fact, in my experience, a small child is more likely to confuse two words with specific meanings (such as “tiger” and “lion”, or “bulldozer” and “dump truck”) than to confuse a general word with a more specific word (such as “tiger” and “cat”, or “bulldozer” and “truck”). So there’s clearly some method to distinguishing general terms from specific terms, and it makes sense to study how kids learn these things.

Researchers evaluated this by using pictures to teach made-up one-syllable words to children and then asking the children to identify more pictures that correspond to that word. They did the same thing with young adults and found that the toddlers and the adults performed similarly. There seemed to be two rules that determined whether the subject interpreted the word as a specific term or a general term. First, the person uses what knowledge they have of the outside world to determine whether the thing in question looks normal or unusual. A blowfish is a distinctive looking creature, so a child is more likely to associate its name with the specific species than to extrapolate the meaning and call other types of fish a “blowfish,” but that same child might perceive a goldfish to be a very normal fish and might mistakenly use the word “goldfish” for other fish.

Dalmatian

The other rule has to do with variety. If you show a child a picture of three different kinds of dogs and teach the child that those are called dogs, the child will easily understand that dogs come in a variety of sizes and appearances. But if you show the same child a picture of three dalmatians and teach them the word dog, that child might think that only a dalmatian is a “dog”. In real life, I would imagine that this applies across a period of time. If the child learns that a certain dalmatian is a “dog” one day and then learns that a certain dachshund is a “dog” a few days later, and then later hears Grandma call her golden retriever a “dog”, that child will understand that “dog” is a general word that encompasses many animals. But if the child doesn’t know any dogs other than Grandma’s golden retriever, he or she may not recognize a dalmatian or a dachshund as a dog for months or even years after learning the word “dog”.

I’ve veered away from the article a little and am making up my own examples now, but the point is the same. Young children use context cues to apply new vocabulary, and they do so using the same cognitive skills that an adult would. Because this happens at a subconscious level, we don’t recognize just how intelligent toddlers are. We’re more likely to be amused at their linguistic errors than impressed at their language acquisition, but it’s pretty incredible just how rapidly children develop vocabulary between the ages of two and about five.

In further baby-related news, researchers have demonstrated that children as young as six months old can display empathy when presented with video clips of anthropomorphic shapes. One such clip shows a circle character and a rectangle character walking together without any bullying behavior, and another clip shows the circle character pushing the rectangle character down a hill, at which point the rectangle cries. (Ya gotta watch out for those circles, buddy!) When the babies were later given a tray containing these anthropomorphic shapes, they displayed a preference for the poor rectangle who had been so cruelly bullied.

Baby

The University of Illinois at Urbana-Champaign has conducted another recent study that also involved making babies watch characters bullying each other. These child researchers are a positive bunch. This study involved older children (the summary says “infants 17 months of age”, which sounds like a contradiction in terms to me) and evaluated whether children this young perceive social hierarchies and power dynamics. The answer is evidently yes. The researchers measured the babies’ degree of surprise or confusion by how long they stared at the scenario played out before them with bear puppets. If this staring is in fact an accurate way to determine what the baby does or doesn’t expect, then the subjects displayed an expectation that a puppet who acts as a leader will step in to prevent wrongdoing. There were several variations on the skit, and researchers noted that the babies didn’t seem to expect this same corrective reaction from a character who was not presented as a leader.

That just about covers it for the science news about babies’ minds, but I’ve got plenty more stuff about minds in general. For example, we’re a step closer to telepathic multiplayer video games. Also, want to see the most detailed images of a human brain that scientists have ever gotten? Then watch this YouTube video. But if you’re having an MRI done anytime soon, don’t expect this kind of image clarity. This scan was done on a post-mortem brain and it took 100 hours.

The research described in this next article has identified the brain area responsible for the uncanny valley. I’ve mentioned the so-called “uncanny valley” before; it’s an interesting phenomenon that has relevance in discussions of technology (specifically AI) or psychology, and I would argue that it has significant applications in literature and the arts. In case you aren’t inclined to look it up or click the hyperlink to my previous blog post, I’ll clarify that the “uncanny valley” refers to the fact that people tend to find non-human things creepy if they look too much like humans.

VMPFC

This recent research demonstrates that this creepy feeling comes from the ventromedial prefrontal cortex (VMPFC), the brain region that is essentially right behind your eyes. This is not surprising, because the roles of the VMPFC have to do with processing fear, evaluating risks, and making decisions. The VMPFC regulates the fight-or-flight reaction generated in another brain region called the amygdala. If you walk into a dimly lit room and see a spooky face staring at you, it’s your amygdala that’s responsible for that immediate, instinctive moment of panic. You feel a jolt of fear, your heart rate suddenly goes up, you probably physically draw back… and then you realize that there’s a mirror on the wall. The spooky face is your own reflection, and it only looks spooky because of the dim lighting. You can thank your VMPFC for assessing the situation and figuring out that there’s no legitimate danger before you had a chance to act on that fight-or-flight response and run away from your own reflection like an idiot.

But the VMPFC also plays the reverse role. Even when your amygdala isn’t reacting to a perceived danger, your VMPFC is analyzing potential risks for the purpose of making decisions. It considers risks other than immediate danger, which include the long-term effects of a decision or the impact that your actions might have on another person. You may have heard of Phineas Gage, a railroad worker who sustained severe brain damage in 1848 and came out of it alive, but with a completely changed personality. It was his VMPFC that was impaired. As a result, his behavior became volatile and he displayed a loss of conscience. While it would be overly simplistic to say that your ventromedial prefrontal cortex is your conscience, it is the region of your brain that you use when you’re making those types of decisions. It’s also responsible for considering risks when you’re making decisions that could impact intangible and non-immediate things like your relationships, social standing, or financial situation. These are all things that are probably very important to you, but because they’re conceptual and don’t involve immediate physical danger, situations that pose a threat to them usually don’t generate that fight-or-flight response from your amygdala. 

Robot

All of this is to say that it stands to reason that the VMPFC is the brain region responsible for generating the general feeling of unease described by the uncanny valley. If you’re having a conversation with a machine that has artificial intelligence and a human-like appearance, you’re not in immediate, obvious, physical danger. Unless you’ve previously had some kind of traumatic experience with AI-endowed robots, your amygdala will probably not be generating the kind of fear that makes you want to run away screaming. But the situation is bizarre enough that your VMPFC remains wary. It’s anticipating the possibility of risks and decisions that require conscious thought rather than instinctive action. And that anticipation translates to what is best described as a creepy feeling.

Finally, as we all know, you can’t tell how smart a person is by what their brain looks like… right? Well, if you use a special type of MRI technique in combination with mathematical algorithms, you can evidently figure out what the network of neural pathways looks like in a particular brain. As is probably obvious, intelligent thought depends upon the efficiency of these neural pathways. Thoughts and ideas literally move through the brain via electrical and chemical signals, and in order to comprehend a new concept or come up with a bright and original idea, you need various parts of your brain to work together. Sure enough, these researchers determined that people with greater general knowledge had more efficient neural pathways. You still can’t tell how smart a person is just by looking at them, but technically, there is a potentially discernible difference between a brain that is smart and one that isn’t.

Science News from June 2019

A few weeks ago, I finished my summary of May’s science news by saying that this month’s post would probably include stories about weather forecasts, pentaquarks, and the Mona Lisa. So those are the stories that I’ll use to begin my science news summary for this past month. 

First, here’s the link to an article about the updates in weather forecasting. There’s some question among meteorologists as to whether the new forecasting model is really ready; it hasn’t yet been as accurate as they’d like. But even if it isn’t quite right just yet, we may be getting close to seeing increased accuracy in weather forecasting.

And here’s the article about the structure of pentaquarks. Admittedly, I don’t understand the significance of pentaquarks. Yes, I know that they’re subatomic particles consisting of five quarks, and this article further explains that pentaquarks come in two pieces, one of which is called a baryon and is made of three quarks, and the other is a meson consisting of a quark and an antiquark. But what kinds of atoms and molecules have pentaquarks, and what properties distinguish them from atoms and molecules without pentaquarks? I’m not sure, so if you know, please leave a comment below. Here’s another fun physics tidbit: Diamonds may be the key to developing the technology to detect dark matter. 

Mona Lisa

This study about the Mona Lisa wasn’t so much a scientific experiment as an analysis of the facial expression in the painting. As such, it’s nothing really groundbreaking; that painting has been around for over five hundred years. In fact, the researchers’ conclusions sound very familiar to me and I think that other people have said similar things in the past. But it’s still an interesting read. This article describes how a group of researchers led by a neurologist from the University of Cincinnati took a close look at Mona Lisa’s famous smile and said that it is probably a non-genuine smile, as evidenced by its asymmetry. Even if this was a new revelation, it wouldn’t be a big surprise because it’s normal for people to fake-smile for pictures. But some people are speculating that Da Vinci was intentional in depicting an insincere smile and that there may be “cryptic messages” to glean from it. The article suggests that perhaps this was a self-portrait or maybe it “referred to a man or a dead woman”, although it doesn’t explain why a fake smile would be evidence of those theories.

Speaking of cryptic messages, this seems like a good segue into the fascinating topics of dreaming and sleep. This is a scientific subfield that I find especially interesting, and I felt that way even before coming to the realization that my own sleep is abnormal. (Yes, I’ve been to sleep specialists, and yes, they’ve confirmed that I’m weird, and no, they don’t know why that is or what to do about it.) This study conducted in Finland had somewhat disappointing results in that researchers were not able to detect participants’ dreams by monitoring brainwaves. That certainly seemed like a realistic goal. Sleep and dreaming are neurological processes, and in fact, brainwave patterns are what distinguish the different sleep stages. We’ve known for a long time that most dreaming happens in REM sleep, which is characterized by brain activity similar to wakefulness even though the rest of the body is at its deepest level of sleep. (REM stands for Rapid Eye Movement, which is another thing that differentiates REM sleep from nREM, or non-REM sleep.) But it’s not a direct correlation. The brain doesn’t necessarily always dream in REM sleep, and some dreaming does occur in nREM sleep. There’s still more to learn about why it works that way.

 

Why We Sleep book Matthew Walker

Why We Sleep: Unlocking the Power of Sleep and Dreams by Matthew Walker, 2017.

I happen to be in the middle of reading a book about sleep, and as it so happens, Matthew Walker, the author of that book, is one of the researchers who conducted this study on sleep and Alzheimer’s disease. In fact, the chapter that I just read was about observations that are backed up by this study. The basic gist is that changes in sleep patterns appear to be an early indicator of Alzheimer’s disease. This news article is very brief and I can’t access the full text of the journal article itself, but based upon the abstract and the chapter in the book, it would seem that there’s still room for debate about cause and effect. Thanks to relatively recent medical advances, we now know that Alzheimer’s disease is linked to (and probably caused by) the buildup of certain proteins in the brain, but is that buildup caused by poor sleep quality and insufficient amounts of sleep? Or is it those protein buildups that impair sleep? According to Walker (at least as of the book’s 2017 publication date), both are probably true. It’s a vicious cycle in which the symptoms of the disease are also what drives the progression of the disease.

 

In other sleep-related news, another study suggested that leaving the light on while sleeping may increase the risk of obesity in women. (The study evidently did not evaluate whether this applies to men) The article isn’t very specific about possible explanations for this, except a line about “hormones and other biological processes”. Since it’s fairly well-established now that inadequate sleep is linked to weight gain, I’m speculating that sleep quality is the significant factor here. It makes sense that artificial light can have an impact on the sleeping person’s sleep cycle, perhaps by making it take a little bit longer to get all the way into deep sleep, or by affecting the proportion of REM to nREM sleep.

I’ve come across some other weight-related science stories. While this one doesn’t directly discuss sleep, it continues the theme of brain activity. (If you’re tired of my fascination with the human brain, here’s a fun story about bees’ cognition) Young children’s brains use a very large proportion of their energy intake. At some points in children’s development, their brains are using more than half of their calories, which is pretty amazing even before you stop to think about the fact that those children’s physical bodies are also growing and developing quickly. The article goes on to say that this energy expenditure could mean that education at the preschool level can stave off obesity. But I think another key takeaway here is that it’s especially important for young children to be well-nourished.

Coffee

Meanwhile, for adults, coffee could help fight obesity because of its effect on BAT (brown adipose tissue), otherwise known as brown fat. The article goes on to describe how brown fat differs from regular white fat and its role in metabolism. As you might guess, caffeine is the reason that coffee has this effect. Although lots of people have long claimed that black coffee combats weight gain, it appears that this study is groundbreaking in demonstrating how this works. (The part about caffeine speeding up metabolism has been common knowledge for a while, but that’s much more vague than the new information about brown fat’s role in the process.)

Yet again this month, there’s some new information about autism. This study identified a part of the brain in which a lower density of neurons corresponds to certain traits and mannerisms associated with autism, specifically those involving rigid thinking. Granted, this information doesn’t add anything to our limited understanding of what causes autism, but another study indicates that one factor might be propionic acid, a preservative often found in processed foods. This article focuses on the effect of propionic acid on a developing fetus. I can’t tell from the article whether this information is specific to a certain prenatal phase of development or if it also applies to children after birth.

In other health-related news, progress is being made in diagnosing Lyme disease, and we may even be approaching a cure for the common cold. For the particularly health-conscious, here are some other things to take into consideration in your everyday life. Certain microbes in the gut may have a positive impact on athletic ability, it’s a good idea for everyone to spend two hours a week out in nature, and antioxidants are actually bad for you. Okay, that’s an unfair oversimplification. But the point is that scientists are coming to the conclusion that antioxidants are not the anti-cancer solution that we’ve thought; they actually could increase the risk of lung cancer because they don’t just protect healthy cells from those harmful free radicals we hear so much about. They also protect cancer cells. Of course, that doesn’t mean that you should avoid foods high in antioxidants. But it does mean that it’s not a good idea to take dietary supplements that give you several times the needed amount of certain nutrients. It’s just one more reason that it’s healthier to rely on food for your nutrition. Incidentally, one such antioxidant is Vitamin C, which has been praised as one of the few nutrients that isn’t bad for you if you consume too much. I guess that’s not true after all. 

BCI

Rather than ending on that bleak note, I’ll wrap this up by pointing out an impressive technological feat. Researchers from Carnegie Mellon University have developed a brain-controlled robotic arm that’s headline-worthy as the least invasive BCI yet. BCI stands for Brain-Computer Interface, and it’s basically what it sounds like. The technology actually exists for computers to respond directly to the human brain. I have no idea how it works. Here’s the article, although you should probably ignore the stock image of a computer cursor. If you want a relevant visual, this brief YouTube video includes a few seconds showing the actual robotic arm. Sure, it’s not exactly cyborg technology as depicted in movies, but it’s still pretty incredible.

Science News from May 2019

This is the second installment of what I hope will become a monthly series. I’ve been periodically checking a few different websites (mostly sciencedaily.com and sciencenews.org) and keeping an eye out for interesting science news stories. Although we’re already halfway through the month of June, this blog post only includes stories through the end of May. But I am in the process of collecting more recent content for next month!

My summary of April’s science news included a lot of studies about food and nutrition, so I’m going to start by following that up with this inconclusive study about whether highly processed foods cause weight gain. The article suggests a couple reasons for the lack of a clear yes/no answer. Nutritional needs and metabolism vary from person to person, and it’s still unclear to scientists what all of the variables are. Besides that, nutrition is hard to study because an accurate scientific study requires a controlled environment and detailed data collection, which means there’s a disconnect from real-life eating habits. This article mentions the possible effects of “social isolation, stress, boredom, and the fact that foods are prepared in a laboratory,” but that barely scratches the surface of the possible confounding variables. There’s also the possibility that participants’ eating habits, amount of exercise, or even their metabolism is affected simply by the knowledge that they’re part of a study on nutrition. Here’s another recent study that didn’t confirm a common nutrition “fact”. It would appear that dietary cholesterol doesn’t really cause strokes. The takeaway here is that nutrition is still a relatively new field of study and there’s a lot more to learn. (On a side note, though, apparently blueberries are good for blood pressure)

Chocolate

Meanwhile, the University of Edinburgh has been asking the big questions and perfecting the chocolate-making process. And in Munich, they’re studying the scent of dark chocolate. They’ve identified 70 different chemicals whose odors combine to create the distinctive smell of dark chocolate, although only 28 to 30 are really detectable. And as long as we’re talking about scents, another study showed that people who drink coffee are more sensitive to the smell of coffee.

Another topic that played a big role in my blog post from last month was artificial intelligence. I have another update to add in that area, too. In the ongoing quest to make AI as similar to the human brain as possible, researchers have noticed that machines with an artificial neural network (as opposed to a conventional computer model, which relies entirely on algorithms and can only “think” sequentially) can have a human-like number sense.

If you aren’t entirely sure what that means, let’s use the image on the right as an example. How many dots are there? You probably noticed that there are five dots as soon as you scrolled down far enough to see it, even before you read these words that tell you why this image is here. But you probably didn’t look closely at it and consciously think the words, “One, two, three, four, five.” As quick and easy as it is to count to five, it’s even quicker and easier to just visually recognize the pattern and know that it illustrates the number five. Your brain is capable of doing that without actually counting. You’re also capable of looking at two different images with a cluster of dots and instinctively knowing which one has more without actually counting. (There’s some debate about whether that’s the exact same skill or just a related skill. My opinion is that it’s different, but there’s obviously a connection)

As I’ve tried to look up more information on visual number sense, I’ve increasingly realized that there are other debates on the topic as well. There’s a variety of questions and opinions about how it works, whether it varies from person to person, and whether it’s an inherent, innate skill or an acquired skill. But based upon what we know about how people learn to read, and also based upon what this new AI story demonstrates, I think it’s pretty clear that this is an example of neuron specialization. You literally use different neurons to recognize the five-ness of this image than the neurons you would use to recognize a different number. Think of a child learning how to read; first he or she must learn to recognize each letter of the alphabet as a distinct symbol and understand that each one makes a different sound, but then he or she has to learn to do so very quickly in order to be able to comprehend the meaning of whole words and sentences. To become a proficient reader, the child must eventually learn to recognize whole words instantaneously. This learning process usually takes at least three or four years because it actually requires changes in the brain. Not only does it necessitate close cooperation between the neural networks used for vision and conceptual comprehension, it also requires specific neurons to specialize in identifying specific visual cues, such as the letter A or the word “the”.

I could ramble for a while longer about that (I am a children’s librarian, after all) but I’ll leave it at that because my point is just that it makes sense that number recognition works similarly. But it’s a lot easier. The concept of “five” is much more intuitive than the concept that a particular arrangement of squiggles corresponds to a particular grouping of sounds which in turn corresponds to a particular thing or idea. I’m not sure that AI would be capable of learning to read; a computer only comprehends text insofar as it’s been programmed to recognize certain letters, words, or commands. If a programmer makes a typo and leaves out a letter or punctuation mark, the computer doesn’t recognize the command. But based upon this new story about AI number sense, a computer with an artificial neural network can indeed use a process akin to neural specialization to develop human-like visual number recognition.
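To make that brittleness concrete, here’s a toy Python sketch of the kind of exact-match “understanding” I mean; the commands are made up purely for illustration:

```python
# A program only "understands" the commands it was explicitly given.
commands = {
    "hello": lambda: print("Hi there!"),
    "time": lambda: print("Sorry, I don't have a clock."),
}

def run(command: str) -> None:
    action = commands.get(command)
    if action is None:
        # One missing letter and the meaning is completely lost.
        print(f"Unrecognized command: {command!r}")
    else:
        action()

run("hello")   # prints: Hi there!
run("helo")    # prints: Unrecognized command: 'helo'
```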

That might not seem like a scientific advancement, because after all, the one advantage that computers have over human brains is their ability to work with numbers almost instantaneously, whether that means counting or arithmetic or more advanced mathematics. But it’s certainly an interesting story because it validates the similarity between an artificial neural network and an actual human neural network. Also, it gives me an excuse to nerd out about neural specialization and literacy acquisition, which is the real point here.

Toddler

But speaking of small children, a new study from Massachusetts General Hospital has found what countless other studies have also shown: Early childhood is a very formative phase of life. It has been common knowledge for a while now that personality traits, social skills, intelligence, and even academic potential are mostly determined by the age of five. This particular study was looking at the impact of adversity such as abuse or poverty, and it evaluated this impact by looking at the biochemistry of epigenetics rather than behavior or self-reported psychological traits. (Epigenetics describes things that are caused by changes in gene expression rather than differences in the genes themselves. In other words, genetics determine what traits or disorders a person is predisposed to have and epigenetics determine whether that person actually develops those traits or disorders.) The data came from a longitudinal (long-term) study that has been collecting both DNA methylation profiles and reports from parents about a variety of factors related to health and life experiences. Predictably, researchers found the greatest correlation between life experiences and DNA methylation changes when the child was under the age of three.

Other interesting stories about neurology and psychology include one study about the brain processes involved in decision-making, another study that identifies the part of the brain responsible for how we process pain and use it to learn to avoid pain, a study showing that (at least among ravens) bad moods really are more contagious than good moods, and finally, some new information that may help explain the cause of autism. (Spoiler: it’s certain genetic mutations) I’m just sharing the links rather than the full stories here in the interest of time, but there’s some fascinating stuff there.

Dog

Here’s another story about genetics, although this one is really just a fun fact. It would appear that your genes influence your likelihood of having a dog. Apparently, this study didn’t look at other types of pets, but I’d be interested to know if this means that pet preference is genetic in general. The study, or at least this article about it, seemed to be more interested in the anthropological aspect of dog ownership because it talks more about the history of the domestication of dogs than about the relationship between humans and animals in general. Another question that I have is how the researchers accounted for the possibility that it’s the early childhood environment and not genetics that determines pet preference. I am sure that my love for cats was initially due to the fact that I was practically adopted at birth by a cat and he was a very significant (and positive) aspect of my early childhood experience. Although this is just anecdotal evidence, I have noticed that many cat lovers grew up in households with cats and many dog lovers grew up in households with dogs. But I digress.

I seem to have already established the pattern of focusing on nutrition and neurobiology over all other kinds of science, but I do have a couple other stories to mention. For one thing, artificial intelligence isn’t the only way in which technology is learning to replicate nature. Now we’ve got artificial photosynthesis, too. We’ve also got some new planets. Eighteen of them, to be exact! But don’t worry; I don’t think anyone is expecting us to memorize their names. They’re not in our solar system. And here’s one final bit of science news: As of May 20, the word “kilogram” has an updated definition. The newly defined kilogram is almost precisely equal to the old kilogram and this change will not have an effect on people’s everyday lives, but the metric system’s measurement of mass is now based upon a fixed physical constant (Planck’s constant, to be specific) rather than on an arbitrary object (a metal cylinder called Le Grand K, which is kept in a vault in France).
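If you’re wondering how a constant of nature can define a unit of mass, the rough idea looks like this. (The exact numerical value is the one fixed by the 2019 SI revision; the rearrangement below is just my own sketch of the algebra, not something from the article.)

```latex
% Planck's constant is now fixed, by definition, at exactly:
h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{J\,s}
  = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^{2}\,s^{-1}}

% The metre and the second are already defined by other fixed constants
% (the speed of light and a caesium transition frequency), so solving
% for the kilogram pins it down too:
1\ \mathrm{kg} = \frac{h}{6.626\,070\,15 \times 10^{-34}\ \mathrm{m^{2}\,s^{-1}}}
```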

So that’ll be it for now. Coming up next time (depending upon what I may find between now and then that’s even better) are some stories about the Mona Lisa, pentaquarks, and developments in weather forecasting.

 

Science News from April 2019

This is something I’ve been thinking about doing for a while now. The idea is that I’ll keep an eye out for recent studies or other science news and then post a monthly summary of the most interesting stories. Usually, the goal will be to post these at the beginning of the month because they’ll be about news from the previous month. But it took me a while to get around to doing this first one. I’m sticking to April 2019 news even though we’re already more than halfway through May.

I’ll be honest here; my real reason for starting this new blog post series is to look up random stuff online and call it “research” instead of “wasting time reading random articles”. However, it would also be great if someone else out there stumbles across my blog and learns something that they might not have otherwise seen. I should probably include a bit of a disclaimer acknowledging that I don’t have a background in science beyond a couple general education classes in undergrad. Also, I’m only including things that I personally find interesting, which means that the content will be skewed towards some topics more than others. For example, there’s going to be a lot more about neuropsychology than about ecology. This particular month, it’s a lot about food and brains. I encourage reader interaction; if you’ve come across some interesting science news that I haven’t included here, please leave a comment!

Black Hole

The biggest science news of April 2019 is a picture of a fuzzy orange circle against a black background. It’s been around for more than a month now, so the hype has faded a little, but you probably were seeing it all over the internet a few weeks ago, usually accompanied by the name and headshot of Katie Bouman, the 29-year-old computer scientist who played a key role in producing this picture. (In fact, this image is the result of many people’s efforts over the course of many years) But as much hype as that fuzzy orange circle is getting, it’s all well-deserved, because it’s a real image of a mysterious and fascinating phenomenon that we know and love from science fiction. We Earth people have now seen a black hole.

A black hole, which is typically formed by the collapse of a massive star, is an area of space with such a strong gravitational force that even light cannot escape from it. It’s actually the opposite of a hole; rather than being empty space, it’s an area of extremely condensed mass. The existence of such phenomena was suggested as early as 1783 by John Michell, a British academic whose writings and professions cover an impressive array of disciplines. (His various roles and titles at Cambridge alone include arithmetic, theology, geometry, Greek, Hebrew, philosophy, and geology; he was also a rector in the Anglican church and a relatively prolific writer of scientific essays) The idea of black holes didn’t get much attention until Albert Einstein’s general theory of relativity came along in 1915, describing how matter can bend spacetime and suggesting that such a thing as a black hole could exist. However, even then, the term “black hole” wasn’t in common usage until around 1964, and black holes basically belonged in the realm of theoretical physics and science fiction at that point and for a while afterwards.

If you look up timelines showing the advances in our knowledge of black holes, there are plenty of landmarks over the course of the last four or five decades, and some of these developments have resulted in images that show the effects of a black hole in some way, shape, or form. But the picture produced by the Event Horizon Telescope on April 10 of this year is the first direct image of a black hole itself. The Event Horizon Telescope is actually several (currently eight) telescopes around the world, synchronized by the most advanced technology that computer science has to offer.

In other astronomy news, planetary scientists say that we’ll have a once-in-a-millennium encounter with an asteroid in about ten years, and it’ll happen on Friday the 13th. (April 13, 2029, to be exact) We’re told there’s no cause for concern; it won’t hit the Earth. This asteroid is just a fun fact, so let’s move on to the extremely important topic of food.

Greek salad

People have been noticing for a while that the “Mediterranean diet” seems to be healthier than the “Western diet”. Although there are some organizations out there that have put forth very specific definitions of what constitutes the “Mediterranean diet,” the basic gist is that American food includes more animal-based fats, whereas the cuisine in places like Greece and southern Italy has more plant-based fats, especially olive oil. Proponents of the Mediterranean diet often point out the significance of cultural factors beyond nutrition. Our eating habits in America tend to prioritize convenience over socialization, while the idyllic Mediterranean meal is home-cooked, shared with family and friends, eaten at a leisurely pace, and most likely enjoyed with a glass or two of wine. I mention this because it has been suggested that a person’s enjoyment of their food actually impacts the way in which their body processes the nutrients. In other words, food is healthier when you enjoy it.

In this particular study, that factor wasn’t taken into consideration and probably didn’t play a role. (The socialization aspect and the wine consumption weren’t mentioned) But researchers did establish that monkeys who were fed a Mediterranean diet consumed fewer calories and maintained a lower body weight than monkeys who were fed an American diet, despite the fact that both groups were allowed to eat whatever amount they wanted. The implication is that the Mediterranean diet, and perhaps plant-based fat specifically, is the key to avoiding overeating. 

On another nutrition-related topic, it turns out that protein shakes aren’t as great as many people think. While it’s true and well-established that a high-protein, low-carb diet is effective for building muscle mass, there are drawbacks. Of course, general common knowledge has long dictated that a varied and balanced diet is good, but it turns out that too much reliance on supplements can actually be dangerous in the long run. Essentially, protein supplements can negatively impact mood, lead to a tendency to overeat, and eventually cause weight gain and decrease lifespan. Even if you’re a bodybuilder, you’re better off getting your protein from regular food than from protein drinks and protein bars. 

Coffee

Let’s move on from foods to beverages. Scientists have suggested that taste preferences could be genetic, and some kind-of-recent studies have backed up that theory. But this recent study from Northwestern University, which focused on a few specific beverages classified by taste category, didn’t reveal many genetic correlations. Instead, it appears that drink preferences are based more on psychological factors. In other words, I don’t love coffee because I love the taste of bitterness; I love coffee because I love the caffeine boost. Another study suggests that I don’t even need to taste my coffee to benefit from it. The psychological association between coffee and alertness means that our minds can be “primed” (that is, influenced) by coffee-related cues, like seeing a picture of coffee. In this study from the University of Toronto, participants’ time perception and clarity of thought were affected just by being reminded of coffee. (You’re welcome for the coffee picture I’ve included here)

I’ve come across a couple other interesting brain-related articles. For one thing, there have been recent developments in the understanding of dementia. We’ve known for a while that Alzheimer’s disease is correlated with (and probably caused by) the accumulation of certain proteins in the brain, but researchers have now identified a different type of dementia caused by the buildup of different proteins. In the short term, this isn’t a life-changing scientific development; this newly acknowledged disorder (called LATE, which stands for Limbic-Predominant Age-related TDP-43 Encephalopathy) has the same symptoms as Alzheimer’s and doctors can only tell the difference after the patient’s death. But in the long run, this new information may help doctors forestall and/or treat dementia.

Meanwhile, researchers are working on developing the technology to translate brain activity into audible speech. The idea is that this will give non-verbal people a way to speak that is a lot more user-friendly and authentic than what exists now.

In other neurological news, the long-standing debate about neurogenesis seems to have found its answer. The question is whether our brains continue to make new brain cells throughout our lives. Some neurologists argue that, once we’re adults, we’re stuck with what we’ve got. In the past, researchers have looked at post-mortem brains and seen little or no evidence to indicate that any of the brain cells were new. But in this study, the researchers made sure that their brain specimens were all from people who had only recently died. This time, there were lots of brain cells that appeared to be brand new. The brains without these new cells came from people who had Alzheimer’s; evidently, dementia and neurogenesis don’t go together. (The question is whether dementia stops neurogenesis or whether dementia is caused by a lack of neurogenesis. Or perhaps neither directly causes the other and there are factors yet to be discovered.)

In somewhat less groundbreaking neurology news, one study from the University of Colorado has shown yet again that extra sleep on the weekend doesn’t make up for sleep deprivation during the week. (This article makes it sound like a new discovery, but medical science has been saying this for a while.) 

Name one thing in this image

All of this neuroscience stuff makes me think of a picture I saw that was originally posted on Twitter a few weeks ago. Apparently, it attracted a good deal of attention because a lot of people found it creepy. The picture looks like a pile of random stuff, but none of the individual objects are recognizable. Psychologically, that’s just weird. It brings to mind the intriguing psychological phenomenon known as the uncanny valley.

Uncanny Valley

The uncanny valley refers to the creepy feeling that people get from something non-human that seems very humanlike. For example, robots with realistic faces and voices are very unsettling. If you don’t know what I mean, look up Erica from the Intelligent Robotics Laboratory at Osaka University. Or just Google “uncanny valley” and you’ll easily find plenty of examples. Although the concept of the uncanny valley generally refers to humanoid robots, the same effect shows up elsewhere, like with realistic dolls or shadows that seem human-shaped. It’s why most people actually find clowns more creepy than funny, and it’s one of several reasons that it’s disturbing to see a dead body. The term “uncanny valley” refers to the shape of a graph that estimates the relationship between something’s human likeness and the degree to which it feels unsettling. Up to a certain point, things like robots or dolls are more appealing the more human-like they are, but then there’s a steep “valley” in the graph where the thing in question is very human-like and very unappealing. This tweeted picture of non-things isn’t quite the same phenomenon because it doesn’t involve human likeness. But there’s still something intrinsically unsettling about an image that looks more realistic at a glance than it does when you look more closely.

So what’s the deal with this picture? It was probably created by an AI (artificial intelligence) computer program designed to compile a composite image based on images from all over the internet. Essentially, the computer program understands what a “pile of random stuff” should look like, but doesn’t know quite enough to recreate accurate images of specific items. This is an interesting demonstration of the current capabilities and limitations of AI technology. At a basic level, AI programs mimic a real brain’s ability to learn. These programs can glean information from outside sources (like Google images) or from interactions with people. They then use this information to do things like create composite images, design simulations, and mimic human conversation, whether text-based or spoken.

Only relatively recently have AI programs been in common usage, but this technology now plays a large role in our everyday lives. Obviously, there are voice assistants like Siri and Alexa that can carry on a conversation of sorts, and self-driving cars are now becoming a reality. Technically, things as mundane as recommendations on Netflix or Amazon are examples of AI, and AI programs are used for simulations and analysis in areas such as marketing, finance, and even health care. Recently, medical researchers have found that AI is creepily accurate at predicting death. And science is continually coming up with ways to advance AI technology. (I admittedly don’t understand this article explaining that magnets can make AI more human-brain-like, but it’s interesting anyway)

In the interest of time, I’m going to end this blog post here. It’s gotten very long even though I’ve cut out probably about half of the stories I’d intended to include. If all goes as planned, I’ll have another one of these to post in a couple weeks.

On Personality Types

1 Comment

Myers Briggs 2

Every few months, it seems that the people of social media collectively rediscover their love for the Myers-Briggs model of personality types. One day, people are posting about politics and television shows and kittens, and then suddenly the next day, it’s all about why life is hard for INTPs or 18 things you’ll only understand if you’re an ESTJ. (I, for the record, am apparently an INFJ) There’s just something about the Myers-Briggs Type Indicator that is fun, interesting, and at least seems to be extremely helpful. For all of the critical things that I’m about to say about it, I admittedly still try the quizzes and read the articles and find it all very interesting. I don’t think it’s total nonsense, even if it isn’t quite as informative as many people think. I should also acknowledge that there’s some difference between the quick internet quizzes and the official MBTI personality assessment instrument, which should be administered by a certified professional and will cost some money. However, the internet quizzes use the same personality model with all the same terminology, and they usually work in a similar way, so I think it’s fair to treat the quiz results as a pretty good guess of your official MBTI personality type.

I personally think that one of the biggest reasons for the appeal of the MBTI and other personality categorization models is that human beings have an inherent tendency to categorize people. It’s why we like to talk about astrology signs, Harry Potter houses, and whether we were a jock or a nerd in high school. I suspect it’s also the main reason that so many people are so passionate about their favorite sports team. When you fit into a clearly defined group, it means automatic camaraderie with the other members of the group.

When personality traits are the criteria for inclusion in a certain group, there’s another perceived benefit. If there is a specific number of personality types and you can identify which one describes you, then you can more easily find advice and information that are specifically relevant for you. To see what I mean, just google “advice for ENFP” or any other Myers-Briggs type. The internet search results are seemingly endless, and at least at a glance, it looks like most of those websites and articles and blog posts are legitimately offering practical advice.

Before editing this blog post, I had a paragraph here in which I got sarcastic about the concept that “knowing” yourself is the answer to everything. That was an unnecessarily lengthy tangent, but the point remains that the appeal of personality type models is the perceived promise of practical applications. It stands to reason that self-knowledge means recognizing your strengths and weaknesses, learning to make the right decisions for your own life, and improving your ability to communicate with others, especially if you know their personality types as well. Once you know what category you belong in, you can find personalized guidelines for all of these things.

Unfortunately, it isn’t that simple. There aren’t precisely sixteen distinct personality types; there’s an infinite array of personality traits. Every person is a little different, and what’s more, everyone changes. Not only does your personality change as you grow up and age, but also, if you’re like most people, your attitudes and behaviors vary a little bit from day to day. The very definition of “personality” is a psychological and philosophical debate that can’t be answered by a five-minute internet quiz. It can’t even be answered by a longer questionnaire with a registered trademark symbol in its title. Even if we were to oversimplify everything for the sake of argument and assume the continuity of personality, (which is more or less what I’m doing for the rest of this blog post) it’s a very complicated topic. There are entire graduate-level courses on theories of personality. I know this because I’m nerdy enough to actually browse graduate course descriptions just out of curiosity. The fact of the matter is that the Myers-Briggs Type Indicator is only one of numerous attempts to categorize and describe personality, and its origin is less academic and credible than most of the others.

Briggs and Myers

As a quick Google search can inform you, the Myers-Briggs Type Indicator was created by a mother/daughter team (Katharine Cook Briggs and Isabel Briggs Myers) and based largely upon their subjective observations. Upon discovering the works of pioneering psychologist Carl Jung in the 1920s, they essentially blended his theories with their own. The MBTI as we know it today was created in the 1940s and reached the awareness of the general public when Isabel Myers self-published a short book on it in 1962.

For those who aren’t already familiar with it, the MBTI is made up of four dichotomies. A personality type consists of either “extroversion” or “introversion”, either “intuition” or “sensing”, either “thinking” or “feeling”, and either “judging” or “perceiving”. Since there is no “both”, “neither”, or “in the middle”, there are sixteen possible combinations: two options for each of the four dichotomies, or 2 x 2 x 2 x 2. Some are significantly more common than others, which I find interesting because my personality type is supposedly the rarest.
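
Just to make the combinatorics concrete, here’s a quick Python sketch (nothing official, just the sixteen letter combinations) that enumerates the types:

from itertools import product

# One letter for each side of the four dichotomies.
dichotomies = [("E", "I"), ("N", "S"), ("T", "F"), ("J", "P")]

types = ["".join(combo) for combo in product(*dichotomies)]
print(len(types))   # 16
print(types[:4])    # ['ENTJ', 'ENTP', 'ENFJ', 'ENFP'] ...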

Although Katherine Briggs and Isabel Myers did a lot of reading on the topic of personality, and both women were quite well-educated by the standards of their time, neither had formal education in psychology, and their theories were not studied and tested very extensively. For those who are interested, the Myers & Briggs Foundation website describes the development and early uses of the MBTI, but I notice that the purpose has always been just to categorize people, not to evaluate the usefulness of the MBTI itself. It doesn’t appear that anyone ever thought to analyze data and see whether Myers-Briggs personality types can predict or correspond to any other data. 

Personality book

I think that this is the book I recall reading as a teenager.

The Five-Factor Model, also known as the Big Five or as the OCEAN model, was developed by using factor analysis on survey data. It dates back to the early 1980s and should be largely attributed to researchers Robert McCrae, Paul Costa, and a little bit later, Lewis Goldberg. However, there were multiple studies conducted by different organizations that produced the same results. Researchers started with a long list of words used to describe personality, surveyed lots of participants on their own personalities, and used statistical calculations to determine which personality traits tend to be lumped together, resulting in five different categories of traits. The end result admittedly looks fairly similar to Myers-Briggs types, except that there are five variables instead of four. However, the simple fact that these five variables were determined by statistical analysis makes it far more science-based than the Myers-Briggs test.
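
For anyone curious what “factor analysis on survey data” looks like in practice, here’s a minimal Python sketch using scikit-learn. The survey responses below are made-up random numbers, so no real structure will emerge; with actual self-ratings on a long list of adjectives, the loadings are what reveal which traits cluster together into factors.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Pretend survey: 500 people rating themselves 1-5 on 20 personality adjectives.
# (The real studies used actual questionnaire data, not random numbers.)
responses = rng.integers(1, 6, size=(500, 20)).astype(float)

fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(responses)

# Each row of the loadings says how strongly each adjective "loads" on a factor;
# with real data, the adjectives that load together define the five factors.
print(fa.components_.shape)  # (5, 20)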

Big Five 1

The “Big Five” personality traits are called Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism. (When I first read about this, they were ordered differently. I suspect they were rearranged specifically for the sake of the OCEAN acronym.) Each is a spectrum rather than a dichotomy as in the Myers-Briggs model. For example, I’m high on the Conscientiousness, Agreeableness, and Neuroticism scales, but in the middle on Openness and very low on Extroversion. I’ve seen multiple internet articles that describe this model completely inaccurately, so I want to stress that there are five factors, not five personality types. The Five-Factor Model doesn’t sort people into just a few categories the way the MBTI does because it acknowledges a middle ground. Depending upon which questionnaire you use, your results for each of the five factors might be numbers on a scale between 1 and 100, or they might be phrased as “very high”, “high”, “average”, “low” or “very low”. Either way, your personality includes a ranking on each of these five factors. This is one of the things I like about the Five-Factor Model. The stark dichotomies of the Myers-Briggs model might be convenient for categorizing people, but they sure don’t accurately portray the nature of personality.

Lexical hypothesis

There’s concern that even the Five-Factor Model is too subjective and unscientific. It’s based on the lexical hypothesis, which is the concept that personality is so central to the human experience that any language will necessarily develop the terminology to accurately describe and define it; therefore, making lists of words is a perfectly reliable and objective starting place for analyzing personality traits. While I personally find that idea fascinating and very plausible, it obviously leaves some room for doubt and criticism. Does language really reflect the nature of humanity so accurately and thoroughly that we can rely on linguistics to discover truths about psychology? Maybe, but it’s not very scientific to assume so.

To get a really clear, specific, and objectively scientific idea of what “personality” is and how it differs from person to person, we’d have to study personality from a neurological perspective. We’d have to consider whether there’s something about a person’s brain structure or neurological processes that makes them more likely to behave or think in a certain way. My understanding is that neurological theories of personality are currently in the works, although I’m only finding information about studies that are still based on survey data. Still, it’s plausible that we’re only a few years away from being able to describe personality in terms of brain function. I find that a lot more compelling than a couple of Victorian ladies speculating about why Joe Schmoe acts nothing like Mr. So-and-so.

I’m far from the only one out there to complain about the prevalence of the Myers-Briggs description of personality. Lots of people, many of whom are more educated than I, have pointed out the lack of scientific data backing the Myers-Briggs test. But that raises another question. The Myers-Briggs Type Indicator has been around for a very long time, and it’s been discussed quite a lot, even in academic contexts. So why hasn’t it been put to the test? Why not do a study in which participants take a questionnaire (or perhaps self-identify their Myers-Briggs type) and then complete certain tasks or engage in certain social activities with one another, while researchers observe? The study would evaluate a few clear, objectively measurable hypotheses based upon things like which personality types will talk more than others, which personality types will perform better on various types of logic puzzles, or which personality types will be the best at remembering details about the room in which they took the personality questionnaire at the beginning of the study. The results of the personality questionnaires, of course, will be confidential until after the observation portion of the experiment. The researchers mustn’t know which subjects belong to which personality types. They will just take note of each individual subject’s actions and interactions, then bring the personality type into the equation later, when it’s too late for observational bias to rear its head. In theory, if the Myers-Briggs personality types are as reliable and clear-cut as people claim, then there would be extremely strong correlations between personality types and the behaviors and cognitive traits measured in this study.
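
To spell out what “bring the personality type into the equation later” might mean in practice, here’s a hedged little Python sketch of the analysis step. The numbers, the group sizes, and the choice of a one-way ANOVA are all my own assumptions for illustration, not anything from an actual study:

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical measurements: minutes each participant spent talking during the
# group activity, recorded before anyone looked at the personality results.
talk_time_by_type = {
    "INFJ": rng.normal(12, 4, size=30),
    "ESTJ": rng.normal(18, 4, size=30),
    "INTP": rng.normal(10, 4, size=30),
}

# Only now are the types attached to the measurements and compared.
f_stat, p_value = f_oneway(*talk_time_by_type.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")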

Brain

I’ve seen some articles that do claim that the Myers-Briggs Type Indicator is evidence-based, but so far, I can’t find any actual research cited. I expect that there probably has been some research done at some point, but nothing that shows beyond a doubt that a person’s Myers-Briggs personality type is useful for predicting behavior, analyzing strengths and weaknesses, or making decisions. The skeptics who compare the MBTI to astrology are not entirely wrong. Personally, my expectation is that any kind of objective analysis would indeed validate the idea that there are different personality types, and that a person’s personality type has some correlation to their behaviors or cognitive patterns, but that the correlations won’t be as strong as people would expect, and that the Myers-Briggs would prove to be less precise than other models based on factor analysis. I’m looking forward to the further developments that are sure to come along soon now that we’re seeing such advances in neuroscience.

Why Base Twelve Would Be Awesomer Than Base Ten

2 Comments

If you think about it, it’s silly that people count in base ten. Yes, it’s convenient because we happen to have ten fingers, but it’s inconvenient in numerous other ways. For example, although 1/2 and 1/4 and 1/5 and 1/10 can be easily expressed by decimals, other common fractions like 1/3 and 1/6 turn into endlessly repeating decimals, and even 1/8 needs three decimal places. To give a less abstract example, the amount of time that I spend at work is measured according to a decimal system. That means that each 0.01 hour of work is 36 seconds, which is a kind of random unit of time. On some level, the human race is clearly aware that units of twelve are logical. The year has twelve months, a foot has twelve inches, and many products are sold in groups of twelve. Yet we still insist upon counting in base ten.

hands

I think it says something about the selfish nature of humanity that we just assume that numbers are meant to be used in base ten simply because we have ten fingers. The human hand, according to our subconscious thought process, is clearly the standard by which we are supposed to measure everything in existence. No source of authority and no rational point of view outranks the supremacy of the human hand. Or something like that. But, mathematically speaking, there are better ways to count.

The short and simple way to say this is just to insist that base twelve is better than base ten because twelve has more factors than ten. But I’m going to back up a couple steps and ramble about some other things first. In all fairness, I must acknowledge that there’s a book I’m currently reading (How Acceptance of a Duodecimal (12) Base Would Simplify Mathematics by F. Emerson Andrews, copyright 1935 and 1944) that basically says everything that I’m saying in this blog post, and I’m sort of drawing from that book in writing this. But I also would like to point out, just for the sake of being a know-it-all, that none of the information or ideas I’m repeating here were new to me. These were all things I had heard, read, and thought about a long time before I happened to notice that book on the library bookshelf and was drawn in by its awesomeness.

The first thing about which I want to ramble is that even the tally mark system is pretty cool. We couldn’t count very high if it wasn’t for the clever construct of splitting numbers into handy units. If you count on your fingers, you only have two sets of five at your disposal, and you’re going to lose count pretty quickly once you get past ten. And if you try to count by writing down one mark for every unit, that’s not going to improve matters much. But by sorting those individual units into groups of five and then counting fives, you can count an awful lot higher. There’s no particular reason that five has to be the base used or that the notation method has to be tally marks as we know them; it’s the system of individual units and larger group units that is so clever and useful. Even though we take that system completely for granted, it’s pretty awesome when you think about it.

Roman PIN number

Early forms of number notation were basically always tally-mark-type systems. Even Roman numerals are really just a glorified form of tally marks. You’ve got the individual unit written as I, the group of five units written as V, the group of ten units written as X, and so forth and so on. As an added bonus, numbers could be written in a more concise way by putting a smaller numeral in front of the greater numeral to indicate that the smaller unit is to be subtracted from the bigger unit rather than added onto it. For example, nine isn’t IIIIIIIII or VIIII because that’s kind of hard to read. It would be easy to accidentally confuse VIIII with VIII. So nine is IX, which means I less than X. So the Roman numeral system definitely had its benefits, but it still is of the same caliber as tally mark systems, and it still is really bad for doing arithmetic. (Quick, what’s MCDXXIV plus XXVII?)

But then the world was revolutionized by the numeral zero, which is the awesomest thing ever invented by humanity with the possible exception of that time when some random person thought of the idea of grinding up coffee beans and filtering hot water through them. Of course, the concept of “none” had always existed and there were ways of expressing the quantity of “none” in words. But there was no numeral zero as we use it now, and so place value didn’t work. It’s difficult to attribute the origin of zero to a specific time or place, because various cultures had various different ways of mathematically denoting zero-ness. But the significant advancement was the use of place value that was made possible by the use of the numeral zero, and that came from India and then gradually became commonly used in Europe during the medieval period. It wasn’t until the 16th century that the current system for writing numbers finished becoming the norm.

I think we can all agree that the Hindu-Arabic number system is much easier to use than Roman numerals. It’s easier to look at 1040 and 203 and know right away that they add up to 1243 than to look at MXL and CCIII and know that they add up to MCCXLIII. And it isn’t hard to add 48 and 21 in your head and get 69, but adding XLVIII and XXI to get LXIX is a little messy. A numerical system that relies on place value is inherently simpler to use than a system that doesn’t.
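
If you’d rather not do that Roman-numeral arithmetic by hand, the subtractive rule is simple enough to write as a little Python sketch (my own throwaway code, handling only well-formed numerals):

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Read a Roman numeral; a smaller value before a larger one is subtracted."""
    values = [ROMAN[ch] for ch in numeral]
    total = 0
    for i, v in enumerate(values):
        if i + 1 < len(values) and v < values[i + 1]:
            total -= v
        else:
            total += v
    return total

print(roman_to_int("MXL") + roman_to_int("CCIII"))      # 1243
print(roman_to_int("MCDXXIV") + roman_to_int("XXVII"))  # 1451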

But there’s still that whole thing about base ten. To say that we count in base ten means that ten is the number that we write as 10. 10 means one group of ten plus zero ones. 12 means one group of ten plus two ones.  176 means one group of ten times ten, seven groups of ten, and six ones. But if, for instance, we counted in base eight, then 10 would mean one group of eight and no ones, which is 8 in base ten. 12 would mean one group of eight and two ones, which is 10 in base ten. 176 would mean one group of eight times eight, seven groups of eight, and six ones, which is 126 in base ten. If that sounds complicated, it’s only because we’re so used to base ten. We instinctively read the number 10 as ten without even thinking about the fact that the 1 in front of the 0 could refer to a different number if we were counting in a different base.
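
That place-value rule, where each position is worth one more power of the base, is easy to spell out in code. Here’s a minimal sketch (a helper of my own, just for illustration) that interprets a digit string in whatever base you choose:

def digits_to_int(digits: str, base: int) -> int:
    """Interpret a string of digits using place value in the given base."""
    value = 0
    for d in digits:
        value = value * base + int(d)
    return value

print(digits_to_int("176", 10))  # 176
print(digits_to_int("176", 8))   # 126  (one sixty-four, seven eights, six ones)
print(digits_to_int("12", 8))    # 10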

I’m not really advocating for getting rid of base ten, because it would be impossible to change our whole system of counting overnight. It took centuries for Hindu-Arabic numerals to replace Roman numerals in Europe, and switching to a different base would be an even bigger overhaul. Base ten is a very familiar system and it would just be confusing for everyone to suddenly change it, not to mention the fact that everything with numbers on it would become outdated and mathematically incorrect. So I’m perfectly content to stick with base ten for the most part, but I still think it’s worth pointing out that base twelve would technically be better. And this brings me to my actual point, which is why exactly base twelve is the best of all possible bases.

It goes without saying that the only feasible bases are positive integers. But I’m saying it anyway just because I am entertained by the notion of trying to use a non-integer as a base. It is also readily apparent that large numbers don’t make good bases. Counting and one-digit arithmetic are basically learned by memorization, and the larger the base is, the more there is to memorize. But small numbers don’t make good bases, either, because it requires a lot of digits to write numbers. Take base three, for instance. Instead of calling this year 2013, we’d be calling it 2202120. (Disclaimer: it’s entirely possible that I made an error. That’s what happens when I use weird bases.) And it wouldn’t be a good idea to use a prime number as a base. Even though I happen to be fond of the number seven and have said before that the people on my imaginary planet count in base seven, I realize that’s weird. (That is, counting in base seven is weird. It’s completely normal that I have an imaginary planet that uses a different mathematical system.) In base ten, we have a convenient pattern; every number that ends in 5 or 0 is divisible by 5, and any number that doesn’t end in 5 or 0 is not divisible by 5. That pattern works because 5 is a factor of 10. Using a prime number as a base would complicate multiplication and division because we wouldn’t have useful patterns like that.
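
As for that disclaimer about possible errors: conversions like this are mechanical enough that a few lines of Python can check them (again, just a throwaway sketch of mine):

def to_base(n: int, base: int) -> str:
    """Write a positive integer in the given base (plain digits, so base ten or less here)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, base)
        digits.append(str(remainder))
    return "".join(reversed(digits))

print(to_base(2013, 3))  # 2202120 -- so the base-three version of 2013 above checks out
print(to_base(2013, 8))  # 3735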

So the numbers that would work relatively well as bases are eight, nine, ten, and twelve, and maybe six, fourteen, fifteen, and sixteen, if we want to be a little more lenient about the ideal size range. Eight and sixteen win bonus points for being 2^3 and 2^4, which is nice and neat and pretty, and nine and sixteen win bonus points for being squares. (Squares are cool, y’all) But twelve is the real winner here, because its factors include all of the integers from one to four. That means that it’s easily divisible by three and four as well as by two, and a multiplication table in base twelve would have lots of handy little patterns. Every number ending in 3, 6, 9, or 0 would be divisible by 3; every number ending in 4, 8, or 0 would be divisible by 4; every number ending in 6 or 0 would be divisible by 6. All multiples of 8 would end in 4, 8, or 0, and all multiples of 9 would end in 3, 6, 9, or 0. As in base ten, all even numbers would end with an even digit and all odd numbers would end with an odd digit. And obviously, every number divisible by twelve would end in 0. Basically, base twelve has the most convenient patterns of any base in the feasible size range.
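
Those last-digit claims are easy to sanity-check, since a number’s last digit in base twelve is just its remainder when divided by twelve. A quick sketch:

# The last base-twelve digit of a number is its remainder mod 12.
for k in (3, 4, 6, 8, 9):
    last_digits = sorted({(k * m) % 12 for m in range(1, 200)})
    print(k, last_digits)
# 3 [0, 3, 6, 9]
# 4 [0, 4, 8]
# 6 [0, 6]
# 8 [0, 4, 8]
# 9 [0, 3, 6, 9]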

Base Twelve Multiplication Table

To prove its convenience, I made this multiplication table myself rather than copying the one in the aforementioned book. (For the record, X refers to ten, because the notation 10 now means twelve, not ten, and ε refers to eleven, because the notation 11 now refers to thirteen, not eleven. I got those additional digits from the book. Part of me wanted to make up new ones, but there was some logic to the way it was done in the book, so I decided to just go with that.) I did double check it against the book just to be sure, and I suppose I ought to confess that I made a couple errors in the 5 and 7 columns. 5 and 7 are a little problematic in base twelve in the same way that 3 and 4 and 6 and 7 and 8 are a little problematic in base ten. But this didn’t take me very long at all to do, and the columns for 2, 3, 4, 6, 8, 9, X, and ε were extremely easy. Since basic arithmetic isn’t exactly a great strength of mine, the fact that I found it easy to construct this multiplication table proves the mathematical ease of arithmetic using base twelve.
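
And if you’d like a table to compare against without doing the arithmetic by hand, here’s a small Python sketch that prints one, using the same X-for-ten and ε-for-eleven digits (the layout is mine; the digits are from the book):

DIGITS = "0123456789Xε"  # X = ten, ε = eleven

def to_base12(n: int) -> str:
    """Write a positive integer in base twelve."""
    out = []
    while n > 0:
        n, remainder = divmod(n, 12)
        out.append(DIGITS[remainder])
    return "".join(reversed(out))

# Print the 1-through-ε times table in base-twelve notation.
for row in range(1, 12):
    print(" ".join(to_base12(row * col).rjust(2) for col in range(1, 12)))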

So, yeah, base twelve is cool and stuff.

Eyes of the Kitten

5 Comments

Romana’s eyes

My cat Romana has beautiful eyes. If I had to describe them as being a certain color, I’d call them yellow, but they’re actually multi-colored and the shades vary. Sometimes they’re greenish blue near the pupil, then gradually range to greenish yellow, and finally yellow at the outside edge of the iris. Other times, they’re completely yellow, but the center is vaguely greenish and the outer part is orangish. They’re very fascinating eyeballs. And they make me wonder, why don’t cats have the same eye colors as humans? Why is it possible for cats’ eyes to change colors so much more drastically than human eyes do? And why does Romana have multiple eye colors?

So, of course, I googled it.

Bo and his yellow eyes

It is common knowledge that cats’ eyesight works differently than humans’. They can see in the dark, they are more nearsighted than people are, and their perception of color is less precise than that of humans. (Contrary to common belief, it is not true that cats or dogs can’t see color, but it’s true that they cannot distinguish nearly as many shades of color as humans can, particularly on the red end of the spectrum.) All of these differences are due to the fact that the anatomy of a cat’s eye is slightly different than that of a human’s eye. The most visibly obvious differences are that a cat’s eye is larger in proportion to its head than a human’s eye is and that the cat’s pupil varies more in size. This second property is related to the fact that cats’ pupils are slit-shaped rather than round, and it is one of the main reasons that cats have good night-vision. The other reason is that cats have a reflective layer behind the retina, which is called the tapetum lucidum. The reflective property of the tapetum lucidum is the reason that cats’ eyes glow in the dark, and that the pupil sometimes appears to glow green. Many other types of animals, including dogs, also have this layer, but humans do not.

Heidegger and her yellow eyes (Although you can’t see it in this picture, they have more of a greenish tint than Bo’s do)

These are the facts that showed up easily in my Google searches, but it took a little while longer to find information about eye colors. I know that eye color is determined genetically and has to do with pigmentation, and it makes logical sense that different species have different genes, but I was looking for an answer that was a little more technical and specific than that. Earlier today, I wrote (and later deleted) a few paragraphs speculating about the shape of a cat’s eyeball. Based upon various diagrams I found, it does appear that cats’ eyes have a greater space between the cornea and the iris than people’s eyes do. This is interesting, and it verifies something that I had already guessed, based upon the fact that the surface of a cat’s eye appears transparent when you see the cat’s face in direct profile. But I was just randomly speculating when I thought that might affect the appearance of the iris. I deleted those paragraphs, and instead summarized that information in just a couple sentences, because I realized that my guess was probably wrong once I finally found more information about the pigments that determine eye color.

This lovely kitten lived on my college campus. I think I took this picture in May my sophomore year.

There are two types of these pigments.  One, melanin, determines how dark the eye is.  People with a lot of melanin in their eyes have brown eyes, while people with less melanin have blue, green, gray, or hazel eyes. The other type of pigment is called lipochrome. Lipochrome is yellowish, and people with a lot of lipochrome in their eyes will have green eyes while people with very little lipochrome have brown, blue, or grayish eyes. The amounts of these two pigments are determined by two different types of genes, and their combination defines the shade of eye color. For example, hazel eyes have a bit more melanin and a little less lipochrome than green eyes. As a side note, violet eyes occur when there is absolutely no melanin in the iris, which means that light actually reflects off the blood vessels in the retina. The purple color is a combination of the colors of the blue iris and the red blood vessels, and it is extremely rare.  This reflective phenomenon is also the cause of red eyes in photographs.

Much less research has been done on the pigmentation of cat eyes, but it is my guess that cat eyes have the same pigments, just in different quantities. Based upon the eye colors that are common in humans and cats, it would seem that cats, in general, have less melanin and more lipochrome in their irises than humans do. This explains why human eye colors are most often blue or brown, and the most common cat eye colors are yellow and green.

A picture of an odd-eyed cat that I got on Google, since I don’t know any odd-eyed cats personally.

Odd-colored eyes (which are more common in cats than in humans) obviously occur when the eyes have different amounts of pigment. Multi-colored irises, (like those in my Romana’s eyes, or the eyes of the actor Baconstrip Cucumberpatch, who plays Sherlock and who was Khan in the new Star Trek movie) evidently are caused by an uneven distribution of pigment throughout the iris. Based upon my observations, it would seem that lipochrome is more likely to be uneven than melanin. Green and yellow eyes are more likely than blue or brown to be flecked or to consist of a spectrum of shades.

Pictured: the aforementioned eyeballs of Buttermilk Colorswatch. You know, that actor who plays Sherlock and who was Khan in the new Star Trek movie.

It is also worth noting that, strictly speaking, a person’s eye color doesn’t change according to mood or what they’re wearing. When the color of the iris appears to change, that is actually an effect of the lighting, which could be affected by the change of facial expression. The iris itself is a consistent color, except in the eyes of very young babies and eyes that have suffered some kind of physical trauma. However, cats’ eyes really do change colors drastically according to mood; I’ve seen it quite a lot. I’ve known many yellow-eyed cats whose eyes will definitely gain a greenish tint when they’re calm and relaxed. Romana’s eyes, which are more of a greenish color in the first place, sometimes get bluish when she’s very content and half-asleep. It’s undeniable that this happens, and I found a reasonable explanation for how it’s possible. The internet informs me that there is some pigmentation on the tapetum lucidum, that reflective layer in cats’ eyes that people don’t have. Cats tend to narrow their eyes when they are in a peaceful and happy mood, and I think it seems perfectly plausible that this could alter the position or angle of the tapetum lucidum, perhaps causing its color to show through the iris less clearly, resulting in a lower-lipochrome color.

Romana

According to this information, it would seem that the answer to my original question about Romana’s distinctive eyes is that she has a lot less melanin and a lot more lipochrome in her eyes than humans do, and that in her particular case, the lipochrome is concentrated more around the outer edge of her iris and less in the area of the pupil.

The Confusing Thing about Random Facts

1 Comment

Every now and then, I see something on facebook, tumblr, or some other sector of the internet that asks participants to list a certain number of random facts about themselves. I rarely do this. It is not necessarily because I believe such surveys to be cliché or pointless; it is rather because I am confused about what determines the randomness of a fact. Does “random”, in this context, mean trivial? Does it mean that the person constructing the list is not to spend much time thinking about the facts or putting them in any particular order? Does it simply mean that the facts do not necessarily need to be related to each other in any way? My observations of things that other people post on the internet have led me to come to the conclusion that the word “random” is defined in many different ways, and that the tone and nature of a list of “random facts” will differ greatly from individual to individual. Some examples of things that can be classified as “random facts” include personal anecdotes, opinions, self-descriptions of personality or physical traits, details about one’s family or pets, personal biographical information, or a detail about one’s hobbies or interests. It would seem that the entire point of “random fact” lists is that everyone has a different idea of what kinds of facts should be on these lists. You may not learn a lot about a person by what facts about themselves they choose to share in this type of context, but what you do learn about them might be interesting.

As a nerd and a smart-aleck, I am incapable of simply accepting this. The reason for my objection is that the word “random” has a specific mathematical meaning. Granted, this mathematical meaning differs from the word’s standardly used definition as given by English dictionaries, which say that “random” means purposeless or haphazard. Normally, it is my policy to trust dictionaries. However, I believe that the official mathematical definition of the word “random” is worth noting. According to the statistics class I took a year ago (and in which I got good grades, thereby justifying my insistent use of the concepts and definitions I acquired from it), “random” means that any possible outcome has an equal chance of occurring. For example, the roll of a fair die is random because each of the sides has an equal chance of being the side facing up when the die lands. Picking a card out of a standard deck is random because each card has an equal chance of being selected. Using a random number generator is random because any number within the selected range has an equal chance of being produced. Listing facts about yourself is not random because, no matter how purposelessly and haphazardly you are doing it, you are deliberately selecting the facts that you will use.

The only way to make a list of facts random is to randomly select these facts from a larger list of facts. That is to say, in order to generate a short list of random facts, a person should first write a long list of facts and then use a completely fair method to randomly choose the predetermined number of facts from the long list. This raises another question, though. How comprehensive should the long list of facts be? It can’t actually contain every possible fact about the person in question, because such a list would be infinitely long. I think that might actually be literally true, but even if it isn’t, the list would be extremely long and therefore inconvenient to use. Thus, I suggest that the list should contain only those facts which the writer of the list deems internet-worthy based upon their coolness and the likelihood that others would be interested in reading them. For example, I might include some facts about my musical preferences or my favorite books or movies, but not a fact about my favorite brand of peanut butter. Alternatively, someone who considers peanut butter to be a fascinating or important topic might use such a fact.
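
For what it’s worth, the procedure I’m describing is nearly a one-liner in Python; random.sample picks a given number of items from a list with every candidate equally likely to be chosen. A quick sketch, with a few placeholder facts borrowed from elsewhere on this blog:

import random

long_list = [
    "I love coffee.",
    "My cat Romana has multi-colored eyes.",
    "My imaginary planet counts in base seven.",
    "I am apparently an INFJ.",
    # ...the rest of the internet-worthy facts would go here
]

# Choose three facts; every fact on the long list has an equal chance of being picked.
print(random.sample(long_list, k=3))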

It’s too bad I have a lot of homework to do tonight.  Otherwise, I would love to spend some time compiling a long list of facts about myself in order to prepare for the next time the internet wants to know some random facts about me.

Cadbury Eggs and Calculus

Leave a comment

Let us consider that the enjoyment of eating a Cadbury egg is 25 units of happiness, and the enjoyment of having a Cadbury egg to eat later is given by the function 20-3t units of happiness per day where t is the number of days since you acquired the Cadbury egg. This function decreases as t increases because of the increased risk that the Cadbury egg will melt.

a) Find the equation in terms of t that gives the total happiness coming from the ownership and consumption of a Cadbury egg.

Answer: 25 plus the integral of (20 - 3t) dt from 0 to t, which works out to 25 + 20t - (3t^2)/2.

b) If one eats the Cadbury egg at t=7, how much happiness has one gotten from the Cadbury egg over the week?

Answer: 25 + the integral of (20 - 3t) dt from 0 to 7 = 25 + (20t - (3t^2)/2) evaluated from 0 to 7 = 25 + (140 - 73.5) - 0 = 91.5

c) If one eats the Cadbury egg at t=5, how much happiness has one gotten from the Cadbury egg over the five days?

Answer: 25 + the integral of (20 - 3t) dt from 0 to 5 = 25 + (20t - (3t^2)/2) evaluated from 0 to 5 = 25 + (100 - 37.5) - 0 = 87.5

d) When should you eat your Cadbury egg in order to maximize Cadbury egg enjoyment?

Answer: t = 6 2/3, because (20 - 3t) is positive before that point and negative after it, so the accumulated happiness 25 + 20t - (3t^2)/2 keeps increasing until t = 6 2/3 and decreases from then on.
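
For anyone who wants to check parts (a) through (d) without doing the calculus by hand, here’s a quick sympy sketch (the 25, 20, and 3 are just the made-up numbers from the problem):

import sympy as sp

t, s = sp.symbols("t s", nonnegative=True)

# Total happiness: 25 for eating the egg, plus the accumulated joy of owning it.
happiness = 25 + sp.integrate(20 - 3 * s, (s, 0, t))

print(sp.expand(happiness))                           # = 25 + 20t - (3/2)t^2
print(happiness.subs(t, 7))                           # 183/2, i.e. 91.5
print(happiness.subs(t, 5))                           # 175/2, i.e. 87.5
print(sp.solve(sp.Eq(sp.diff(happiness, t), 0), t))   # [20/3], i.e. t = 6 2/3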

e) When are you going to eat that Cadbury egg?

Answer: t=0.

Note to self: You’re really supposed to save Easter candy until Easter, you know.
