Science News from May 2019


This is the second installment of what I hope will become a monthly series. I’ve been periodically checking a few different websites (mostly sciencedaily.com and sciencenews.org) and keeping an eye out for interesting science news stories. Although we’re already halfway through the month of June, this blog post only includes stories through the end of May. But I am in the process of collecting more recent content for next month!

My summary of April’s science news included a lot of studies about food and nutrition, so I’m going to start by following that up with this inconclusive study about whether highly processed foods cause weight gain. The article suggests a couple of reasons for the lack of a clear yes/no answer. Nutritional needs and metabolism vary from person to person, and it’s still unclear to scientists what all of the variables are. Besides that, nutrition is hard to study because an accurate scientific study requires a controlled environment and detailed data collection, which means there’s a disconnect from real-life eating habits. This article mentions the possible effects of “social isolation, stress, boredom, and the fact that foods are prepared in a laboratory,” but that barely scratches the surface of the possible confounding variables. There’s also the possibility that participants’ eating habits, amount of exercise, or even their metabolism are affected simply by the knowledge that they’re part of a study on nutrition. Here’s another recent study that didn’t confirm a common nutrition “fact”: it would appear that dietary cholesterol doesn’t really cause strokes. The takeaway here is that nutrition is still a relatively new field of study and there’s a lot more to learn. (On a side note, though, apparently blueberries are good for blood pressure.)

Meanwhile, the University of Edinburgh has been asking the big questions and perfecting the chocolate-making process. And in Munich, they’re studying the scent of dark chocolate. They’ve identified 70 different chemicals whose odors combine to create the distinctive smell of dark chocolate, although only 28 to 30 are really detectable. And as long as we’re talking about scents, another study showed that people who drink coffee are more sensitive to the smell of coffee.

Another topic that played a big role in my blog post from last month was artificial intelligence. I have another update to add in that area, too. In the ongoing quest to make AI as similar to the human brain as possible, researchers have noticed that machines with an artificial neural network (as opposed to a conventional computer model, which relies entirely on algorithms and can only “think” sequentially) can have a human-like number sense.

If you aren’t entirely sure what that means, let’s use the image on the right as an example. How many dots are there? You probably noticed that there are five dots as soon as you scrolled down far enough to see it, even before you read these words that tell you why this image is here. But you probably didn’t look closely at it and consciously think the words, “One, two, three, four, five.” As quick and easy as it is to count to five, it’s even quicker and easier to just visually recognize the pattern and know that it illustrates the number five. Your brain is capable of doing that without actually counting. You’re also capable of looking at two different images with a cluster of dots and instinctively knowing which one has more without actually counting. (There’s some debate about whether that’s the exact same skill or just a related skill. My opinion is that it’s different, but there’s obviously a connection.)

As I’ve tried to look up more information on visual number sense, I’ve increasingly realized that there are other debates on the topic as well. There’s a variety of questions and opinions about how it works, whether it varies from person to person, and whether it’s an inherent, innate skill or an acquired skill. But based upon what we know about how people learn to read, and also based upon what this new AI story demonstrates, I think it’s pretty clear that this is an example of neuron specialization. You literally use different neurons to recognize the five-ness of this image than the neurons you would use to recognize a different number. Think of a child learning how to read; first he or she must learn to recognize each letter of the alphabet as a distinct symbol and understand that each one makes a different sound, but then he or she has to learn to do so very quickly in order to be able to comprehend the meaning of whole words and sentences. To become a proficient reader, the child must eventually learn to recognize whole words instantaneously. This learning process usually takes at least three or four years because it actually requires changes in the brain. Not only does it necessitate close cooperation between the neural networks used for vision and conceptual comprehension, it also requires specific neurons to specialize in identifying specific visual cues, such as the letter A or the word “the”.

I could ramble for a while longer about that (I am a children’s librarian, after all) but I’ll leave it at that because my point is just that it makes sense that number recognition works similarly. But it’s a lot easier. The concept of “five” is much more intuitive than the concept that a particular arrangement of squiggles corresponds to a particular grouping of sounds which in turn corresponds to a particular thing or idea. I’m not sure that AI would be capable of learning to read; a computer only comprehends text insofar as it’s been programmed to recognize certain letters, words, or commands. If a programmer makes a typo and leaves out a letter or punctuation mark, the computer doesn’t recognize the command. But based upon this new story about AI number sense, a computer with an artificial neural network can indeed use a process akin to neural specialization to develop human-like visual number recognition.

That might not seem like a scientific advancement, because after all, the one advantage that computers have over human brains is their ability to work with numbers almost instantaneously, whether that means counting or arithmetic or more advanced mathematics. But it’s certainly an interesting story because it validates the similarity between an artificial neural network and an actual human neural network. Also, it gives me an excuse to nerd out about neural specialization and literacy acquisition, which is the real point here.

But speaking of small children, a new study from Massachusetts General Hospital has found what countless other studies have also shown: Early childhood is a very formative phase of life. It has been common knowledge for a while now that personality traits, social skills, intelligence, and even academic potential are mostly determined by the age of five. This particular study was looking at the impact of adversity such as abuse or poverty, and it evaluated this impact by looking at the biochemistry of epigenetics rather than behavior or self-reported psychological traits. (Epigenetics describes things that are caused by changes in gene expression rather than differences in the genes themselves. In other words, genetics determine what traits or disorders a person is predisposed to have, and epigenetics determine whether that person actually develops those traits or disorders.) Data was gathered from a longitudinal (long-term) study that has been collecting both DNA methylation profiles and reports from parents about a variety of factors related to health and life experiences. Predictably, researchers found the greatest correlation between life experiences and DNA methylation changes when the child was under the age of three.

Other interesting stories about neurology and psychology include one study about the brain processes involved in decision-making, another study that identifies the part of the brain responsible for how we process pain and use it to learn to avoid pain, a study showing that (at least among ravens) bad moods really are more contagious than good moods, and finally, some new information that may help explain the cause of autism. (Spoiler: it’s certain genetic mutations.) I’m just sharing the links rather than the full stories here in the interest of time, but there’s some fascinating stuff there.

Here’s another story about genetics, although this one is really just a fun fact. It would appear that your genes determine your likelihood of having a dog. Apparently, this study didn’t look at other types of pets, but I’d be interested to know if this means that pet preference is genetic in general. The study, or at least this article about it, seemed to be more interested in the anthropological aspect of dog ownership because it talks more about the history of the domestication of dogs than about the relationship between humans and animals in general. Another question that I have is how the researchers accounted for the possibility that it’s the early childhood environment and not genetics that determines pet preference. I am sure that my love for cats was initially due to the fact that I was practically adopted at birth by a cat and he was a very significant (and positive) aspect of my early childhood experience. Although this is just anecdotal evidence, I have noticed that many cat lovers grew up in households with cats and many dog lovers grew up in households with dogs. But I digress.

I seem to have already established the pattern of focusing on nutrition and neurobiology over all other kinds of science, but I do have a couple other stories to mention. For one thing, artificial intelligence isn’t the only way in which technology is learning to replicate nature. Now we’ve got artificial photosynthesis, too. We’ve also got some new planets. Eighteen of them, to be exact! But don’t worry; I don’t think anyone is expecting us to memorize their names. They’re not in our solar system. And here’s one final bit of science news: As of May 20, the word “kilogram” has an updated definition. The newly defined kilogram is almost precisely equal to the old kilogram and this change will not have an effect on people’s everyday lives, but the metric system’s measurement of mass is now based upon a mathematical constant (Planck’s constant, to be specific) rather than on an arbitrary object (a metal cylinder called Le Grand K, which is kept in a vault in France).
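For the curious, here’s roughly how a constant can stand in for a metal cylinder (this is my own gloss, not something from the article): the new definition fixes the numerical value of Planck’s constant exactly, and since the metre and second are already defined (via the speed of light and the caesium clock frequency), the kilogram is whatever amount of mass makes that fixed number come out right.

```latex
% Exact by definition since May 20, 2019:
h = 6.62607015 \times 10^{-34}\ \mathrm{kg\,m^2\,s^{-1}}
% ...which pins down the unit of mass:
1\ \mathrm{kg} = \frac{h}{6.62607015 \times 10^{-34}\ \mathrm{m^2\,s^{-1}}}
```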

So that’ll be it for now. Coming up next time (depending upon what I may find between now and then that’s even better) are some stories about the Mona Lisa, pentaquarks, and developments in weather forecasting.



Really Awesome Fun Things That I Would Do If I Had Time On My Hands


I should probably start by acknowledging that, when I say “really awesome fun things,” I mean what other people mean when they say, “weird, pointless, and nerdy things.” In fact, people often respond to my “really awesome” ideas by giving me a strange look and saying, “But… why?” And the only answer I have for that is, “Because… awesomeness.” So keep that answer in your mind as you read this list and think, “But…why?” about everything on it.

Number One: Codify the language used on my imaginary planet

Here is the Cherokee syllabary.

On my imaginary planet, they use a language that, unlike English and other Indo-European languages, has a syllabary rather than an alphabet. That means that each syllable is represented by a symbol. This system is not unique to the people of my planet; it is used in some Earth cultures, most notably Japanese and Cherokee. But it is much less widespread than a phonetic alphabet because it tends to be inefficient and more complex. At least, that’s the way it works on Earth. On my imaginary planet, they use a syllabary-based language just because I personally think it would be more fun to make up. It actually won’t be too complex because there are only 100 different syllables in their language, and when I say 100, I mean 49, because they count in base seven. The 49 one-syllable words are one-digit integers, pronouns, articles, conjunctions, and prepositions. Two-syllable words are adjectives and adverbs. Three-syllable words are verb roots (with a fourth-syllable suffix determining tense, mood, and aspect), and five-syllable words are nouns. That allows for a vocabulary of as many as 10,001,010,100 words counting in base 7, which is 282,595,348 in base 10. (I should perhaps acknowledge at this point that there is a significant possibility that my math is wrong, because that is a thing that does happen sometimes; the little calculation below is my attempt to double-check it.) Considering that there are approximately a million words in the English language (an exact count would be impossible due to the nature of linguistics), it is safe to say that my planet’s imaginary language would not exhaust its capacity for vocabulary. With the exception of verbs and nouns, this language would have a more limited number of words than most Earth languages, and it is my intention for the grammar to also be simpler and involve fewer exceptions to rules. That’s as far as I’ve gotten; I haven’t formed the syllabary or made up any vocabulary yet. Once I do that, the next step is to translate the entire Bible into my imaginary language. And of course, the translation has to be done from the original Hebrew and Greek, because it is vitally important that all of these imaginary people have a scripturally accurate Bible. (Note: This translation could take a while, because I currently do not know Biblical Hebrew at all and only sort of kind of know a little Biblical Greek.)
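And here is that double-check: a little Python sketch of the word-count calculation (the category labels are just my shorthand for the word classes above). It agrees with the figures in the previous paragraph.

```python
# Count the words in each category, then render the total in base 7.

def to_base_7(n):
    """Render a non-negative integer in base 7."""
    digits = ""
    while n:
        n, r = divmod(n, 7)
        digits = str(r) + digits
    return digits or "0"

SYLLABLES = 49  # "100" in base 7

counts = {
    "one-syllable function words": SYLLABLES,
    "two-syllable adjectives and adverbs": SYLLABLES ** 2,
    "three-syllable verb roots": SYLLABLES ** 3,
    "five-syllable nouns": SYLLABLES ** 5,
}

total = sum(counts.values())
print(f"{total:,}")      # 282,595,348
print(to_base_7(total))  # 10001010100
```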

Number Two: Memorize lots of Pi

I am a little embarrassed to admit that all of Pi that I can remember is 3.1415. Actually, I thought I remembered a few more digits, but it turns out that I had the 9 and the 2 switched. I was right that the next digit after that was a 6, but that was as far as I could get. I used to know a lot more Pi; I think that at one point, I had about 40 digits memorized. Of course, that’s not extremely impressive because there are some extreme nerds out there who have Pi memorized to a bajillion places. But the point is that I want to be one of those extreme nerds because that seems like a fun skill to have.

Number Three: Be an Artificially Artificial Intelligence

I’m pretty sure that’s more or less how Cleverbot works.

This game would make use of an anonymous and random internet chat program, of which there are several in existence. Before beginning, I would make a short list of random phrases. In the first chat, I would enter each of these phrases and make a note of how the other person responded. From that point on, anytime someone uses one of my original phrases, I would respond in the same way that person #1 responded. When chatting with person #2, I would use the phrases that had been typed by person #1 in chat #1. Once again, I would keep track of the responses for use in any later situation where someone types those phrases to me. Over the course of hundreds or thousands of chats, I would build up an extensive list telling me how to respond to things that people say. The longer I do this, the more my chat messages would begin to resemble an actual conversation with an actual person.
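Here’s a minimal sketch of the bookkeeping, assuming a chat service hiding behind send() and receive() (both of which are stand-ins here, not any real chat program’s API):

```python
playbook = {}  # a message -> how some earlier stranger replied to it
phrases = ["Hello!", "Do you like trains?"]  # my seed phrases to try on people

def chat_once(send, receive):
    """Run one chat: imitate past strangers, and learn new replies as I go."""
    last_sent = phrases.pop(0) if phrases else "Hello!"
    send(last_sent)
    while True:
        msg = receive()
        if msg is None:  # the other person disconnected
            break
        # Whatever they said is a reply to what I just sent. Remember it,
        # so I can reuse it whenever anyone sends me `last_sent` later.
        playbook.setdefault(last_sent, msg)
        # Their message also becomes new material to try on future strangers.
        phrases.append(msg)
        # Reply the way a real person once replied to this exact message,
        # or fall back to trying out a phrase I haven't used yet.
        last_sent = playbook[msg] if msg in playbook else phrases.pop(0)
        send(last_sent)

# Toy demo: "chat" with a canned partner who hangs up after two messages.
canned = iter(["I love trains!", "Blue, I guess.", None])
chat_once(send=print, receive=lambda: next(canned))
```

The longer the playbook gets, the less often the fallback fires, which is exactly the point of the game.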

Number Four: Organize my wardrobe

This is what I need to do. I need to make a list of every non-underwear article of clothing that I own and determine which of them “go with” which others, so that I have a specific list of every outfit I have available. For each outfit, I shall then determine rules for when and where it can be worn depending upon factors such as degree of formality and suitability in cold or hot temperatures. Finally, I shall make a complicated and convoluted chart that tells me when to wear what. The point of this is not to simplify the process of getting dressed or to save time; the point is to have the fun of consulting a chart. Because that’s a very entertaining thing to do.
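For illustration’s sake, here’s a minimal sketch of what consulting that chart might look like; the outfits, fields, and thresholds are all invented for the example.

```python
# Each outfit gets a formality score and a temperature ceiling.
outfits = [
    {"name": "jeans and a sweater", "formality": 1, "max_temp_f": 60},
    {"name": "khakis and a polo", "formality": 2, "max_temp_f": 85},
    {"name": "the interview suit", "formality": 3, "max_temp_f": 75},
]

def what_to_wear(formality_needed, temp_f):
    """Consult the chart: formal enough for the occasion, not too hot outside."""
    options = [o["name"] for o in outfits
               if o["formality"] >= formality_needed and temp_f <= o["max_temp_f"]]
    return options or ["stay home"]

print(what_to_wear(formality_needed=2, temp_f=70))
# ['khakis and a polo', 'the interview suit']
```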

Number Five: Finish the mancala algorithm

(I use the word “finish” because this is a project that I have started before. See this blog post from June 2012.) When a game of mancala begins, the first player has six choices, and only one of them makes any sense. It is fairly self-apparent that the number of possible moves increases exponentially for each additional move being considered in the calculation, and that the number of good moves also increases to such an extent that there is a very wide variety of possible outcomes. However, the game of mancala is a lot simpler than, for example, chess or Scrabble, so it seems that it should be feasible, although ridiculously time-consuming, to create an algorithm determining what the best series of moves is. One goal of this algorithm is to develop a strategy that will always win; another goal is to determine how early in the game it is possible to predict beyond a doubt who will win. As far as I can tell, the best way to develop such an algorithm is to play lots and lots and lots of mancala and try out lots of possible combinations of moves. It isn’t literally necessary to play out every possible game, but it will be necessary to try out a lot of them, to try out various ways of continuing the game after various sets of opening moves, and to take a mathematical approach to the outcomes.
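To give a sense of what that brute-force approach looks like, here’s a minimal sketch of exhaustive game-tree search (minimax) over mancala positions. I’m assuming the common Kalah rules (an extra turn for landing in your own store, a capture for landing in your own empty pit), and I’ve left out the end-of-game sweep of leftover stones, so this is an outline of the technique rather than the finished algorithm.

```python
# Board layout: indices 0-5 are player 0's pits, 6 is player 0's store,
# 7-12 are player 1's pits, 13 is player 1's store.

def legal_moves(board, player):
    lo = 0 if player == 0 else 7
    return [i for i in range(lo, lo + 6) if board[i] > 0]

def apply_move(board, player, pit):
    """Sow the stones from `pit`; return (new board, whose turn is next)."""
    board = board[:]
    stones, i = board[pit], pit
    board[pit] = 0
    my_store, their_store = (6, 13) if player == 0 else (13, 6)
    while stones:
        i = (i + 1) % 14
        if i == their_store:          # sowing skips the opponent's store
            continue
        board[i] += 1
        stones -= 1
    if i == my_store:                 # landed in own store: move again
        return board, player
    own_pits = range(0, 6) if player == 0 else range(7, 13)
    if i in own_pits and board[i] == 1 and board[12 - i] > 0:   # capture
        board[my_store] += board[i] + board[12 - i]
        board[i] = board[12 - i] = 0
    return board, 1 - player

def best_score(board, player, depth):
    """Minimax: the best reachable (player 0's store - player 1's store)."""
    moves = legal_moves(board, player)
    if depth == 0 or not moves:
        return board[6] - board[13]
    scores = [best_score(*apply_move(board, player, pit), depth - 1)
              for pit in moves]
    return max(scores) if player == 0 else min(scores)

start = [4] * 6 + [0] + [4] * 6 + [0]
print(best_score(start, player=0, depth=6))
```

Searching the whole game this way is exactly the “ridiculously time-consuming” part; the tree would have to be pruned or cached to get anywhere near a full solution.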

Number Six: Learn how to talk in Iambic Pentameter

It seems to me that the ultimate test of quick thinking is the ability to maintain a poetic meter and rhyme scheme in conversational speech. One would have to count stressed and unstressed syllables and think of rhymes all while concentrating on communicating whatever it is that one wants to say in the context of the given conversation. I’m not sure if such a thing would be possible, but it would be so totally awesome if it was.

Number Seven: Continue my experiments on whether putting your hands on your face helps you think

Many people, myself included, will sometimes put their hands on their face while they are thinking, and I am curious about why. In the past, I have made up experiments to test the intellectual effects of this gesture. (See these two blog posts from Summer 2012.) These tests have obviously been inadequate to answer this question for various reasons. For one thing, they were all conducted in the same way, measuring intellectual activity by having me memorize strings of random digits. But memorization isn’t the only kind of thought. It seems to me that a strategic game is a more thorough test of effective thought. Chess is the ideal game for this experiment because it has no element of luck and is more intellectually stimulating than certain other games like checkers. (In case anyone is interested, I dislike the game of checkers and am always glad for an opportunity to say so.) The next experiment would involve playing consecutive online chess games, all using the same time limit, for many hours on end. During some games, I would rest my face on my hands while I think, and during other games, I would make sure not to touch my face at all. This experiment would have to be repeated several times on different days in order to decrease the risk of confounding variables. I imagine that I would need to play a few hundred games before calculating the results. Even then, these results would be meaningless unless I came up with further experiments which would involve other people and other methods of measuring intellectual activity.

Number Eight: Memorize cool movies

This one is pretty self-explanatory. It also is quite obvious that the first couple movies that I would memorize would be Star Wars and The Princess Bride. Others that would be high on the list would be the other Star Wars movies, Monty Python and the Holy Grail, The Hitchhiker’s Guide to the Galaxy, the Back to the Future trilogy, and The Matrix. You know, all those movies that cool people quote all the time.

Number Nine: Finish this list

This list is incomplete because there are a semi-infinite number of really awesome fun things that I would do if I had time on my hands. There are a bunch that I had intended to include in this partial list that have temporarily slipped my mind, and I’m going to go ahead and post this without them because what I have here is already sufficiently long. Then there are others that I thought of a long time ago and have completely forgotten, and many more that simply haven’t ever occurred to me yet. Just to finish the list would be an unachievable goal. But it would be entertaining to spend a lot of time working on it.

Why I don’t think that computers are about to take over the world…yet


The blog post that I posted yesterday morning took a very long time to upload, because there was a thing on my computer that I had to hunt down and uninstall first. I call it a thing because, according to my virus check, it wasn’t actually a virus. In my opinion, though, it qualified as a virus because it got onto my computer without my permission, repeatedly brought up a pop-up message asking me if I wanted to install a certain toolbar, and in so doing, slowed down my internet to the point that it was completely useless. Apparently, the reason that my computer didn’t recognize it as a virus was that it wasn’t any kind of malicious spyware or a trojan or anything like that. All it wanted to do was to install that toolbar, and it wasn’t even being insidiously sneaky about it. But I didn’t want that toolbar and I definitely didn’t want to see that pop-up message every couple of seconds or to have my internet working in slow motion. So I found the problem and got rid of it, and it fortunately went away willingly as soon as I clicked the delete button.

Technically, I can’t really blame my computer. That sort-of-a-virus was presumably created by another person on another computer and snuck onto my computer uninvited because that’s the way it was designed to work. Still, it would be nice if the computer was capable of deciding for itself that it doesn’t need a random new toolbar. But it doesn’t work that way. My computer responds to these kinds of situations by saying to itself, “Ooh! A new toolbar! I must need it; I’m getting a message that says it’s important!” I respond to these kinds of situations by clicking on the red X in the corner because my brain functions well enough (just barely) to be aware that I don’t really want that toolbar even if some computer program insists that I do. I am capable of deciding for myself what I do or don’t want, but my computer doesn’t have the capability of making decisions, so it just believes whatever it’s told. If I’m not the one telling it to do things, that’s a problem.

Computers basically just do what they’re programmed to do. Even artificial intelligence is, as the name indicates, artificial. You can have conversations with an artificial intelligence computer program on certain websites that work more or less the same way as instant messaging with a person, except that there’s no actual person on the other side. The computer decides what to say based upon data that tells it what real people have said in response to certain types of phrases. I’m sure it’s an extremely complicated and clever algorithm, but anybody has the capability of messing with it by typing in random words and phrases instead of having a sensible conversation with it. When other users try to communicate with the computer program in the same way that they’d communicate with a person, they will get a lot of non sequitur responses. The computer is responding in the way that is logical according to its programming, which doesn’t take into account the fact that there are no rules or algorithms determining what real people can do with the system.

This is the kind of thing that can happen when you’re playing the computer. This picture comes from my brother, and I did not ask for permission to use it. Sorry, Brother.

Every type of artificial intelligence has the same limitations. For example, one of the games that came on my computer is chess. At the lowest levels, it’s very easy to win because the game is apparently programmed to make stupid blunders every so often and to miss any clever tactics that take more than a couple moves to win material. The higher levels, of course, are more difficult, and I myself have never been able to beat them, but there are people who have discovered easy ways to defeat that program at the highest level in just a few moves, and they can make the same strategy keep working no matter how many times they do it. I know this because some of these people have made videos and posted them on YouTube. Maybe it could technically be possible for that same exact game to be played between two human players, but it certainly wouldn’t happen many consecutive times unless the player who was losing was doing it on purpose. I am aware that there are chess computer programs that are more advanced and can’t be outsmarted so easily, but even then, they only work because some intelligent human being has programmed them to follow some kind of algorithm, and not because the computers themselves have a sentient understanding of chess and the capability of thinking about the game as they play.


Really, the only thing a computer can do that it hasn’t been told to do is to stop working. I have known of computers to lose internet access for no readily apparent reason, to fail to save documents, and to freeze for hours on end, but I have never known of a computer to come to the conclusion that humanity is inferior and to choose to destroy or enslave it. I’m not necessarily saying that such a thing is absolutely impossible, but it couldn’t happen anytime soon because computers would first have to develop some human characteristics such as the ability to follow a thought process (as opposed to blindly following an algorithm), a desire for power or control, and basic human stubbornness. As long as computers are gullible and stupid enough to want every toolbar that the internet offers them, they are clearly lacking in these traits, and I think humanity is safe from the threat of computers taking over everything and ending the world as we know it.

Come to think of it, though, The Matrix specifically says that artificial intelligence will take over the world in the early 21st century, the proponents of the Mayan apocalypse specifically predict the end of the world in 2012, and the weirdos with the signs that I saw at the Riverfest in Little Rock last month were very certain that the end is near. Perhaps they’re all on to something after all.