Science News from April 2019

Leave a comment

This is something I’ve been thinking about doing for a while now. The idea is that I’ll keep an eye out for recent studies or other science news and then post a monthly summary of the most interesting stories. Usually, the goal will be to post these at the beginning of the month because they’ll be about news from the previous month. But it took me a while to get around to doing this first one. I’m sticking to April 2019 news even though we’re already more than halfway through May.

I’ll be honest here; my real reason for starting this new blog post series is to look up random stuff online and call it “research” instead of “wasting time reading random articles”. However, it would also be great if someone else out there stumbles across my blog and learns something that they might not have otherwise seen. I should probably include a bit of a disclaimer acknowledging that I don’t have a background in science beyond a couple general education classes in undergrad. Also, I’m only including things that I personally find interesting, which means that the content will be skewed towards some topics more than others. For example, there’s going to be a lot more about neuropsychology than about ecology. This particular month, it’s a lot about food and brains. I encourage reader interaction; if you’ve come across some interesting science news that I haven’t included here, please leave a comment!

The biggest science news of April 2019 is a picture of a fuzzy orange circle against a black background. It’s been around for more than a month now, so the hype has faded a little, but you were probably seeing it all over the internet a few weeks ago, usually accompanied by the name and headshot of Katie Bouman, the 29-year-old computer scientist who played a key role in taking this picture. (In fact, this image is the result of many people’s efforts over the course of many years) But as much hype as that fuzzy orange circle is getting, it’s all well-deserved, because it’s a real photograph of a mysterious and fascinating phenomenon that we know and love from science fiction. We Earth people have now seen a black hole.

A black hole, which can form from the collapse of a massive star, is an area of space with such a strong gravitational force that even light cannot escape from it. It’s actually the opposite of a hole; rather than being empty space, it’s an area of extremely condensed mass. The existence of such phenomena was suggested as early as 1783 by John Michell, a British academic whose writings and professions cover an impressive array of disciplines. (At Cambridge alone, he held positions in arithmetic, theology, geometry, Greek, Hebrew, philosophy, and geology; he was also a rector in the Anglican church and a relatively prolific writer of scientific essays) The idea of black holes didn’t get much attention until Albert Einstein’s general theory of relativity came along in 1915, describing how matter can bend spacetime and suggesting that such a thing as a black hole could exist. However, even then, the term “black hole” wasn’t in common usage until around 1964, and black holes basically belonged in the realm of theoretical physics and science fiction at that point and for a while afterwards.

If you look up timelines showing the advances in our knowledge of black holes, there are plenty of landmarks over the course of the last four or five decades, and some of these developments have resulted in images that show the effects of a black hole in some way, shape, or form. But the picture released by the Event Horizon Telescope team on April 10 of this year is the first direct image of a black hole itself. The Event Horizon Telescope is actually several (currently eight) radio telescopes around the world whose observations are timestamped with atomic clocks and then combined, using a technique called very-long-baseline interferometry, into the equivalent of a single Earth-sized telescope.

In other astronomy news, planetary scientists say that we’ll have a once-in-a-millennium encounter with an asteroid (known as Apophis) in about ten years, and it’ll happen on Friday the 13th. (April 13, 2029, to be exact) We’re told there’s no cause for concern; it won’t hit the Earth. This asteroid is just a fun fact, so let’s move on to the extremely important topic of food.

People have been noticing for a while that the “Mediterranean diet” seems to be healthier than the “Western diet”. Although there are some organizations out there that have put forth very specific definitions of what constitutes the “Mediterranean diet,” the basic gist is that American food includes more animal-based fats, whereas the cuisine in places like Greece and southern Italy has more plant-based fats, especially olive oil. Proponents of the Mediterranean diet often point out the significance of cultural factors beyond nutrition. Our eating habits in America tend to prioritize convenience over socialization, while the idyllic Mediterranean meal is home-cooked, shared with family and friends, eaten at a leisurely pace, and most likely enjoyed with a glass or two of wine. I mention this because it has been suggested that a person’s enjoyment of their food actually impacts the way in which their body processes the nutrients. In other words, food is healthier when you enjoy it.

In this particular study, that factor wasn’t taken into consideration and probably didn’t play a role. (The socialization aspect and the wine consumption weren’t mentioned) But researchers did establish that monkeys who were fed a Mediterranean diet consumed fewer calories and maintained a lower body weight than monkeys who were fed an American diet, despite the fact that both groups were allowed to eat whatever amount they wanted. The implication is that the Mediterranean diet, and perhaps plant-based fat specifically, is the key to avoiding overeating. 

On another nutrition-related topic, it turns out that protein shakes aren’t as great as many people think. While it’s true and well-established that a high-protein, low-carb diet is effective for building muscle mass, there are drawbacks. Of course, common knowledge has long held that a varied and balanced diet is good, but it turns out that too much reliance on supplements can actually be dangerous in the long run. Essentially, protein supplements can negatively impact mood, lead to a tendency to overeat, and eventually cause weight gain and decrease lifespan. Even if you’re a bodybuilder, you’re better off getting your protein from regular food than from protein drinks and protein bars.

Let’s move on from foods to beverages. Scientists have suggested that taste preferences could be genetic, and some fairly recent studies have backed up that theory. But this recent study from Northwestern University, which focused on a few specific beverages classified by taste category, didn’t reveal many genetic correlations. Instead, it appears that drink preferences are based more on psychological factors. In other words, I don’t love coffee because I love the taste of bitterness; I love coffee because I love the caffeine boost. Another study suggests that I don’t even need to taste my coffee to benefit from it. The psychological association between coffee and alertness means that our minds can be “primed” (that is, influenced) by coffee-related cues, like seeing a picture of coffee. In this study from the University of Toronto, participants’ time perception and clarity of thought were affected just by being reminded of coffee. (You’re welcome for the coffee picture I’ve included here)

I’ve come across a couple other interesting brain-related articles. For one thing, there have been recent developments in the understanding of dementia. We’ve known for a while that Alzheimer’s disease is correlated with (and probably caused by) the accumulation of certain proteins in the brain, but researchers have now identified a different type of dementia caused by the buildup of different proteins. In the short term, this isn’t a life-changing scientific development; this newly acknowledged disorder (called LATE, which stands for Limbic-Predominant Age-related TDP-43 Encephalopathy) has the same symptoms as Alzheimer’s and doctors can only tell the difference after the patient’s death. But in the long run, this new information may help doctors forestall and/or treat dementia.

Meanwhile, researchers are working on developing the technology to translate brain activity into audible speech. The idea is that this will give non-verbal people a way to speak that is a lot more user-friendly and authentic than what exists now.

In other neurological news, the long-standing debate about neurogenesis seems to have found its answer. The question is whether our brains continue to make new brain cells throughout our lives. Some neurologists argue that, once we’re adults, we’re stuck with what we’ve got. In the past, researchers have looked at post-mortem brains and seen little or no evidence to indicate that any of the brain cells were new. But in this study, the researchers made sure that their brain specimens were all from people who had only recently died. This time, there were lots of brain cells that appeared to be brand new. The brains that lacked these new cells came from people who had Alzheimer’s; evidently, dementia and neurogenesis don’t go together. (The question is whether dementia stops neurogenesis or whether dementia is caused by a lack of neurogenesis. Or perhaps neither directly causes the other and there are factors yet to be discovered.)

In somewhat less groundbreaking neurology news, one study from the University of Colorado has shown yet again that extra sleep on the weekend doesn’t make up for sleep deprivation during the week. (This article makes it sound like a new discovery, but medical science has been saying this for a while.) 

All of this neuroscience stuff makes me think of a picture I saw that was originally posted on Twitter a few weeks ago. Apparently, it attracted a good deal of attention because a lot of people found it creepy. The picture looks like a pile of random stuff, but none of the individual objects are recognizable. Psychologically, that’s just weird. It brings to mind the intriguing psychological phenomenon known as the uncanny valley.

The uncanny valley refers to the creepy feeling that people get from something non-human that seems very humanlike. For example, robots with realistic faces and voices are very unsettling. If you don’t know what I mean, look up Erica from the Intelligent Robotics Laboratory at Osaka University. Or just Google “uncanny valley” and you’ll easily find plenty of examples. Although the concept of the uncanny valley generally refers to humanoid robots, the same effect applies to other things, like realistic dolls or shadows that seem human-shaped. It’s why most people actually find clowns more creepy than funny, and it’s one of several reasons that it’s disturbing to see a dead body. The term “uncanny valley” refers to the shape of a graph that estimates the relationship between something’s human likeness and the degree to which it feels unsettling. Up to a certain point, things like robots or dolls are more appealing the more human-like they are, but then there’s a steep “valley” in the graph where the thing in question is very human-like and very unappealing. This tweeted picture of non-things isn’t quite the same phenomenon because it doesn’t involve human likeness. But there’s still something intrinsically unsettling about an image that looks more realistic at a glance than it does when you look more closely.

So what’s the deal with this picture? It was probably created by an AI (artificial intelligence) computer program designed to compile a composite image based on images from all over the internet. Essentially, the computer program understands what a “pile of random stuff” should look like, but doesn’t know quite enough to recreate accurate images of specific items. This is an interesting demonstration of the current capabilities and limitations of AI technology. In essence, AI programs mimic a real brain’s ability to learn. These programs can glean information from outside sources (like Google images) or from interactions with people. They then use this information to do things like create composite images, design simulations, and mimic human conversation, whether text-based or audio.

Only relatively recently have AI programs come into common usage, but this technology now plays a large role in our everyday lives. Obviously, there are assistants like Siri and Alexa that are capable of something like human conversation, and self-driving cars are now becoming a reality. Technically, things as mundane as recommendations on Netflix or Amazon are examples of AI, and AI programs are used for simulations and analysis in areas such as marketing, finance, and even health care. Recently, medical researchers have found that AI is creepily accurate at predicting death. And science is continually coming up with ways to advance AI technology. (I admittedly don’t understand this article explaining that magnets can make AI more human-brain-like, but it’s interesting anyway)

In the interest of time, I’m going to end this blog post here. It’s gotten very long even though I’ve cut out probably about half of the stories I’d intended to include. If all goes as planned, I’ll have another one of these to post in a couple weeks.


Where Credit is Due: Osbourn Dorsey


Who’s the most famous person of all time? Who else is near the top of the list? How do you quantify fame anyway? Is it all about the number of people who recognize a celebrity’s name, or do we need to take into consideration other factors, like how well-liked the famous person is, or how much information most people know about them? How do we compare historical figures to contemporary figures?

There are a number of “most famous people” or “most influential people” lists out there. This website, which is essentially a continuation of a 2013 book on the topic, ranks people according to an algorithm that takes several factors into consideration. This user-generated list is also interesting to browse. Although Jesus is at the top of both lists, and the classical Greek philosophers fare pretty well on both, the similarities pretty much end there.

It’s also worth noting that “most famous” and “most influential” are not the same thing. I think that we tend to assume that the two are pretty closely correlated, at least as far as historical figures go, but I don’t think that’s necessarily true. Sure, the most famous historical figures are famous specifically because they did things that shaped the course of history. But there are other people who aren’t household names even though their inventions, ideas, or accomplishments have had an impact on our everyday lives. So, just for the fun of it, I’ve decided to start an ongoing series of blog posts to write about these forgotten figures.

I’m starting with someone so obscure that I can’t even find a Wikipedia article about him. To find any sort of biographical information, I’ve had to resort to census records, city directories, and slave emancipation documents. The inventor in question was born a slave, freed at the age of eight months, and didn’t show up on many documents and records after that. But at the age of sixteen, he invented a very common and handy device that you probably use every day: the doorknob.

Osbourn Dorsey was probably born in September 1861. His mother’s name was Christina Dorsey and he had two older siblings, Mary and Levi. We know this from the Washington DC slave emancipation records from April 1862, where he is listed as “Osbourn Dorsey- son of the above named Christina- Aged about eight months- ordinary size- dark complexion.” For the record, Mary was six years old and also “ordinary size” while Levi was four and “large in stature”. The children’s father is not mentioned. Mary Peter, the Dorseys’ former owner, submitted a petition for compensation after they were freed. Evidently, Mary Peter had no slaves other than Christina Dorsey and her three children. They had previously belonged to a family by the last name of Washington, but after Ann Washington died, Mary Peter acquired the Dorseys in April 1861, prior to Osbourn’s birth. Mary Peter asked for $1350 in compensation for the freeing of her four slaves.

We next see Osbourn Dorsey in the 1870 census, although it lists him as being eleven years old, which must have been an error. The other members of the household were his parents (the father’s name was Levi), a “domestic servant” named Barbara, and three siblings: Mary, Levi, and a younger sister named Cecilia. According to the 1880 census, 18-year-old Osbourn worked for a butcher and lived with his parents, sister, brother, and a brother-in-law named Isaac Williams. Cecilia is not listed.

The salient part of this story came shortly before that 1880 census. On December 10, 1878, patent #210,764 was issued to Osbourn Dorsey of Washington DC, who had “invented certain new and useful improvements in door holding devices”. The diagrams and written description are clearly recognizable as what we now call a doorknob. (Although it has more parts; it involves a rod that extends horizontally between the doorknob and the doorframe.) There are two very important things to note here. One is the surprising fact that doorknobs have only been around for a little over 140 years. The other is that the doorknob was invented by someone named Dorsey, which is hilarious. It was my favorite fun fact for months; I have annoyed many people with this knowledge.

The name Osbourn Dorsey does show up in city directories and a couple of censuses. Actually, it shows up a little too often; it would appear that there were at least three Osbourn Dorseys living in Washington DC in the late 1800s and early 1900s. Because of that, I have to acknowledge that it’s possible I’ve been looking at the wrong Osbourn Dorsey. The inventor of the doorknob was definitely not the Osbourn Dorsey who was born in 1878 and was incarcerated as of the 1900 census.

But it could have been the Osbourn Dorsey who was born around 1830, worked as a janitor, had a wife named Rachel who died prior to 1910, and had either two or three children. It appears that he had two daughters named Cora and Christy, but that his household also included a boy named William Smith who later married Christy. However, neither the 1870 nor the 1880 census record specified that William’s last name was not Dorsey or that he was not the son of the head of the household.

As a side note, if you Google the name Osbourn Dorsey, you might find a picture that has been posted in various places with his name, but that is incorrect. It’s actually of James Meredith, a civil rights activist who was more than sixty years younger than Dorsey.

I’m just going with the Osbourn Dorsey born in 1861 because that’s the estimated birth year that I saw on a couple of websites that may not be entirely reliable. Also, the city directories from 1907 to 1910 list this Osbourn Dorsey as an engineer, so it makes sense to speculate that he’s the one who had patented a significant invention. Unfortunately, I can’t find anything to indicate when Osbourn the Engineer died, or whether he had a wife and children. Also, I find it interesting that these two Osbourn Dorseys never seem to be listed in the same city directory. Yet they can’t actually be the same person; they both show up in the 1870 and 1880 censuses, and the older Osbourn Dorsey was an adult by the year 1870.

Maybe someday, someone will find an old diary or some letters that will clear up this mystery, or maybe someone will figure it out just by poring through these same records more thoroughly than I have. (To be honest, I have spent way too much time on this. It’s a little ridiculous.) If you know more than I do, please share your information in the comments. But as things stand now, we know very little about this brilliant inventor who changed the world by revolutionizing the way we open and close doors.

On Personality Types


Every few months, it seems that the people of social media collectively rediscover their love for the Myers-Briggs model of personality types. One day, people are posting about politics and television shows and kittens, and then suddenly the next day, it’s all about why life is hard for INTPs or 18 things you’ll only understand if you’re an ESTJ. (I, for the record, am apparently an INFJ) There’s just something about the Myers-Briggs Type Indicator that is fun, interesting, and at least seems to be extremely helpful. For all of the critical things that I’m about to say about it, I admittedly still try the quizzes and read the articles and find it all very interesting. I don’t think it’s total nonsense, even if it isn’t quite as informative as many people think. I should also acknowledge that there’s some difference between the quick internet quizzes and the official MBTI personality assessment instrument, which should be administered by a certified professional and will cost some money. However, the internet quizzes use the same personality model with all the same terminology, and they usually work in a similar way, so I think it’s fair to treat the quiz results as a pretty good guess of your official MBTI personality type.

I personally think that one of the biggest reasons for the appeal of the MBTI and other personality categorization models is that human beings have an inherent tendency to categorize people. It’s why we like to talk about astrology signs, Harry Potter houses, and whether we were a jock or a nerd in high school. I suspect it’s also the main reason that so many people are so passionate about their favorite sports team. When you fit into a clearly defined group, it means automatic camaraderie with the other members of the group.

When personality traits are the criteria for inclusion in a certain group, there’s another perceived benefit. If there is a specific number of personality types and you can identify which one describes you, then you can more easily find advice and information that are specifically relevant for you. To see what I mean, just google “advice for ENFP” or any other Myers-Briggs type. The internet search results are seemingly endless, and at least at a glance, it looks like most of those websites and articles and blog posts are legitimately offering practical advice.

Before editing this blog post, I had a paragraph here in which I got sarcastic about the concept that “knowing” yourself is the answer to everything. That was an unnecessarily lengthy tangent, but the point remains that the appeal of personality type models is the perceived promise of practical applications. It stands to reason that self-knowledge means recognizing your strengths and weaknesses, learning to make the right decisions for your own life, and improving your ability to communicate with others, especially if you know their personality types as well. Once you know what category you belong in, you can find personalized guidelines for all of these things.

Unfortunately, it isn’t that simple. There aren’t precisely sixteen distinct personality types; there’s an infinite array of personality traits. Every person is a little different, and what’s more, everyone changes. Not only does your personality change as you grow up and age, but also, if you’re like most people, your attitudes and behaviors vary a little bit from day to day. The very definition of “personality” is a psychological and philosophical debate that can’t be answered by a five-minute internet quiz. It can’t even be answered by a longer questionnaire with a registered trademark symbol in its title. Even if we were to oversimplify everything for the sake of argument and assume the continuity of personality (which is more or less what I’m doing for the rest of this blog post), it’s a very complicated topic. There are entire graduate-level courses on theories of personality. I know this because I’m nerdy enough to actually browse graduate course descriptions just out of curiosity. The fact of the matter is that the Myers-Briggs Type Indicator is only one of numerous attempts to categorize and describe personality, and its origin is less academic and credible than most of the others.

As a quick Google search can inform you, the Myers-Briggs Type Indicator was created by a mother/daughter team (Katharine Cook Briggs and Isabel Briggs Myers) and based largely upon their subjective observations. Upon discovering the works of pioneering psychologist Carl Jung in the 1920s, they essentially blended his theories with their own. The MBTI as we know it today was created in the 1940s and reached the awareness of the general public when Isabel Myers self-published a short book on it in 1962.

For those who aren’t already familiar with it, the MBTI is made up of four dichotomies. A personality type consists of either “extroversion” or “introversion”, either “intuition” or “sensing”, either “thinking” or “feeling”, and either “judging” or “perceiving”. Since there is no “both”, “neither”, or “in the middle”, there are sixteen possible combinations. Some are significantly more common than others, which I find interesting because my personality type is supposedly the rarest.
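Just to make the combinatorics concrete, here’s a minimal Python sketch (my own illustration, not anything official from the MBTI) that enumerates all sixteen type codes from the four dichotomies:

```python
from itertools import product

# The four dichotomies, each with exactly two poles and no middle ground.
dichotomies = [
    ("E", "I"),  # extroversion / introversion
    ("N", "S"),  # intuition / sensing
    ("T", "F"),  # thinking / feeling
    ("J", "P"),  # judging / perceiving
]

# Four independent either/or choices give 2 x 2 x 2 x 2 = 16 combinations.
types = ["".join(combo) for combo in product(*dichotomies)]
print(len(types))  # 16
print(types)       # ['ENTJ', 'ENTP', 'ENFJ', ..., 'ISFP']
```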

Although Katharine Briggs and Isabel Myers did a lot of reading on the topic of personality, and both women were quite well-educated by the standards of their time, neither had formal education in psychology, and their theories were not studied and tested very extensively. For those who are interested, the Myers & Briggs Foundation website describes the development and early uses of the MBTI, but I notice that the purpose has always been just to categorize people, not to evaluate the usefulness of the MBTI itself. It doesn’t appear that anyone ever thought to analyze data and see whether Myers-Briggs personality types can predict or correspond to any other data.


I think that this is the book I recall reading as a teenager.

The Five-Factor Model, also known as the Big Five or as the OCEAN model, was developed by using factor analysis on survey data. It dates back to the early 1980s and is largely attributed to researchers such as Lewis Goldberg, Robert McCrae, and Paul Costa, although multiple studies conducted by different organizations produced the same results. Researchers started with a long list of words used to describe personality, surveyed lots of participants on their own personalities, and used statistical calculations to determine which personality traits tend to be lumped together, resulting in five different categories of traits. The end result admittedly looks fairly similar to Myers-Briggs types, except that there are five variables instead of four. However, the simple fact that these five variables were determined by statistical analysis makes it far more science-based than the Myers-Briggs test.
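For anyone curious what “factor analysis on survey data” actually looks like, here’s a rough, hypothetical sketch in Python. It uses scikit-learn and randomly generated placeholder ratings rather than any real personality survey, so the extracted factors here are meaningless; the point is just the shape of the procedure: many adjective ratings go in, and a handful of latent factors come out.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder data: 500 imaginary respondents rating themselves 1-5 on 40 adjectives.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(500, 40)).astype(float)

# Ask for five latent factors, echoing the Five-Factor Model.
fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(ratings)  # each respondent's position on the 5 factors
loadings = fa.components_           # how strongly each adjective loads on each factor

print(scores.shape)    # (500, 5)
print(loadings.shape)  # (5, 40)
```

With real survey responses, researchers inspect the loadings to see which adjectives cluster together and then give the resulting factors names like Openness and Conscientiousness.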

The “Big Five” personality traits are called Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism. (When I first read about this, they were ordered differently. I suspect they were rearranged specifically for the sake of the OCEAN acronym.) Each is a spectrum rather than a dichotomy as in the Myers-Briggs model. For example, I’m high on the Conscientiousness, Agreeableness, and Neuroticism scales, but in the middle on Openness and very low on Extroversion. I’ve seen multiple internet articles that describe this model completely inaccurately, so I want to stress that there are five factors, not five personality types. The Five-Factor Model doesn’t sort people into just a few categories the way the MBTI does because it acknowledges a middle ground. Depending upon which questionnaire you use, your results for each of the five factors might be numbers on a scale between 1 and 100, or they might be phrased as “very high”, “high”, “average”, “low” or “very low”. Either way, your personality includes a ranking on each of these five factors. This is one of the things I like about the Five-Factor Model. The stark dichotomies of the Myers-Briggs model might be convenient for categorizing people, but they sure don’t accurately portray the nature of personality.

There’s concern that even the Five-Factor Model is too subjective and unscientific. It’s based on the lexical hypothesis, which is the concept that personality is so central to the human experience that any language will necessarily develop the terminology to accurately describe and define it; therefore, making lists of words is a perfectly reliable and objective starting place for analyzing personality traits. While I personally find that idea fascinating and very plausible, it obviously leaves some room for doubt and criticism. Does language really reflect the nature of humanity so accurately and thoroughly that we can rely on linguistics to discover truths about psychology? Maybe, but it’s not very scientific to assume so.

To get a really clear, specific, and objectively scientific idea of what “personality” is and how it differs from person to person, we’d have to study personality from a neurological perspective. We’d have to consider whether there’s something about a person’s brain structure or neurological processes that makes them more likely to behave or think in a certain way. My understanding is that neurological theories of personality are currently in the works, although I’m only finding information about studies that are still based on survey data. Still, it’s plausible that we’re only a few years away from being able to describe personality in terms of brain function. I find that a lot more compelling than a couple of Victorian ladies speculating about why Joe Schmoe acts nothing like Mr. So-and-so.

I’m far from the only one out there to complain about the prevalence of the Myers-Briggs description of personality. Lots of people, many of whom are more educated than I, have pointed out the lack of scientific data backing the Myers-Briggs test. But that raises another question. The Myers-Briggs Type Indicator has been around for a very long time, and it’s been discussed quite a lot, even in academic contexts. So why hasn’t it been put to the test? Why not do a study in which participants take a questionnaire (or perhaps self-identify their Myers-Briggs type) and then complete certain tasks or engage in certain social activities with one another, while researchers observe? The study would evaluate a few clear, objectively measurable hypotheses based upon things like which personality types will talk more than others, which personality types will perform better on various types of logic puzzles, or which personality types will be the best at remembering details about the room in which they took the personality questionnaire at the beginning of the study. The results of the personality questionnaires, of course, would be confidential until after the observation portion of the experiment. The researchers mustn’t know which subjects belong to which personality types. They would just take note of each individual subject’s actions and interactions, then bring the personality type into the equation later, when it’s too late for observational bias to rear its head. In theory, if the Myers-Briggs personality types are as reliable and clear-cut as people claim, then there would be extremely strong correlations between personality types and the behaviors and cognitive traits measured by this test.
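If a study like that were ever run, the blinded analysis step could be as simple as the following hypothetical Python sketch. The data, column names, and grouping are all invented for illustration; the idea is just that behavior gets measured first, the sealed type labels are only merged in afterwards, and then a standard significance test checks for differences across groups.

```python
import pandas as pd
from scipy.stats import f_oneway

# Behavioral measurements, recorded before anyone looks at personality types.
observations = pd.DataFrame({
    "subject_id": [1, 2, 3, 4, 5, 6],
    "words_spoken": [420, 95, 310, 150, 500, 88],  # counted during a group task
})

# The "sealed envelope": questionnaire results, unsealed only after observation.
sealed_types = pd.DataFrame({
    "subject_id": [1, 2, 3, 4, 5, 6],
    "mbti_type": ["ENFP", "INTJ", "ESTJ", "INFJ", "ENTP", "ISTP"],
})

merged = observations.merge(sealed_types, on="subject_id")
merged["attitude"] = merged["mbti_type"].str[0]  # first letter: "E" or "I"

# One-way ANOVA: did extroverts and introverts differ in how much they talked?
groups = [g["words_spoken"].to_numpy() for _, g in merged.groupby("attitude")]
stat, p_value = f_oneway(*groups)
print(f"F = {stat:.2f}, p = {p_value:.3f}")
```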

I’ve seen some articles that do claim that the Myers-Briggs Type Indicator is evidence-based, but so far, I can’t find any actual research cited. I expect that there probably has been some research done at some point, but nothing that shows beyond a doubt that a person’s Myers-Briggs personality type is useful for predicting behavior, analyzing strengths and weaknesses, or making decisions. The skeptics who compare the MBTI to astrology are not entirely wrong. Personally, my expectation is that any kind of objective analysis would indeed validate the idea that there are different personality types, and that a person’s personality type has some correlation to their behaviors or cognitive patterns, but that the correlations wouldn’t be as strong as people expect, and that the Myers-Briggs would prove to be less precise than other models based on factor analysis. I’m looking forward to the further developments that are sure to come along soon now that we’re seeing such advances in neuroscience.

Best Books of 2018


It’s been a long time coming (that’s code for “I’m lazy and I write slowly”), but I’ve finally finished putting together my list of the best books of 2018! I posted it on my other blog, so I’m reblogging it here.

Librarian Magdalena

This list is going to work a little differently from those I wrote in previous years, because frankly, I didn’t read as many 2018 books as I’d have liked. I’ve questioned whether it’s even worth putting together a Best Books of 2018 list. But let’s be real here, lists are always worth making. Still, there are hundreds and hundreds of noteworthy books coming out every year, and even if we count picture books, I only read slightly over one hundred new books this year. I especially dropped the ball on YA books. And I didn’t get around to reading a single novel in verse this year, not even The Poet X, which won the National Book Award, or Rebound, which is the prequel to the Newbery award winner from 2015. So I acknowledge that my personal Best Books of 2018 list isn’t an exhaustive list at all.

For this year…


Stuff I Haven’t Done


This winter, I didn’t do much besides sleep. I did go to work, Walgreens, and occasionally church, and I’ve gotten myself severely hooked on this game.  (I recommend changing the size. 50×20 is a lot more fun and still fits on the screen if you zoom out to 80%) But that’s about it. If I think about it, I can come up with a pretty long list of valid excuses, several of which have been medically verified. The point, though, is that I really need to get myself doing Stuff again.

By now, we’re well into springtime, regardless of whether we’re going by the solar calendar, social conventions, or the weather. (On second thought, let’s not bring the weather into this. The weather is far too fickle to be given any kind of authority over people.) To be honest, I think that the season only has a slight impact on my productivity, but we’re going to pretend otherwise because that allows me to believe that I’ll soon have my life totally under control. It’ll be all good by summer at the absolute latest, right?

As a side note, I don’t understand why it’s considered negative or pessimistic to say you’re having a bad day. (Or week, year, semester, or any other unit of time) It seems to me that it shows optimism when someone says they’re having a bad day, because the implication is that the next day is likely to be better. Either the problem(s) at hand is/are minor and will quickly be resolved, or the situation will look better from the fresh perspective of a new day. Someone who’s being pessimistic and negative will think otherwise and expect current problems to stick around or to have ongoing repercussions. That attitude and expectation would not be accurately represented by talking about time in concrete terms.

But I digress. The semantics are irrelevant to my announcement that I intend to do Stuff.

At this point, I’m not committing to great feats of Thing-doing. I’m talking about relatively little Stuff. For example, I did a Thing last Monday. I came home from work, ate food, and read part of a book. (If you don’t see how that qualifies as a Thing, you can consider yourself lucky that you don’t fully comprehend the level of Not-Doing-Stuff that I have achieved) I’ve been regularly doing a Thing I like to call “Getting Into My Bed Before Falling Asleep Instead of Just Sleeping on the Living Room Floor”. (Okay, that’s not true, I’ve actually been calling it “Going to Real-Person Bed”, which doesn’t really make any sense now that I see it written out) At this point, I’m hoping to move on to bigger and better Things like establishing a consistent morning routine, regularly vacuuming the cat hair from my carpeting, and spending most of my free time in ways that are somewhat meaningful to me, like reading and writing. Notice how realistic I’m being here. I think I deserve some credit for setting realistic goals because I hate realistic goals. I much prefer unrealistic goals.

The main reason that I’m writing all of this is to avoid the generic blog post that basically just says, “I haven’t been blogging in a long time, but I think I want to get back to it.” So instead, I’m offering this brief ramble in which I vaguely overshare what’s been going on in my life. (Do I get extra credit for the double oxymoron?) But the sentiment is essentially the same. All of that was basically a lead-up to me saying that I hope I’ll be posting more stuff soon.

Long, long ago, the last time I posted something on this blog, it was supposed to be the first of a three-or-four part series. That’s still technically the plan, but the other parts aren’t coming for a while. It’s not that I was having a particularly hard time writing Part Two, or that I didn’t know what it should say. In fact, my draft in progress is quite long. It just isn’t very interesting. And right now, I’m not going to put time and effort into writing anything boring.

Rambling about Millennials, Part One



Pictured: said book

I recently read a book from 2006 that commented that we hadn’t yet coined a term to label the age demographic that comes after “Baby Boomers” and “Generation X”. Although that book wasn’t very outdated otherwise, that one sentence is now inaccurate and actually kind of funny. At some point shortly after that book was published, the media fell in love with the word “millennial,” and for a while now, it’s been consistently used as the name of a certain demographic group. The millennial generation is roughly defined as those who were children at the change of the millennium, although some have specified that millennials are those born between 1982 and 2004. (That parameter evidently was first laid out by authors Neil Howe and William Strauss, whose theories are more speculative than empirical, but worth googling if you find yourself with a few spare minutes)

At any rate, since I was born in 1991, I’m definitely well within this range and am indubitably a millennial. As such, I have a lot I’d like to say on various subtopics of millennial-ness, some of it addressing generalizations and some of it describing my own theories that are also more speculative than empirical. In fact, I have too much millennial-themed potential content to stick it all into one blog post, so this is going to be a multi-part series. (At this point, I’m thinking it’ll be four parts) A logical starting place is the very concept of categorizing people into specific age demographics.

Personally, when I was a child, I was under the impression that humanity essentially fell into three groups: children, teenagers, and adults. Sometimes, it might be convenient to sort adults into the categories of parent-aged adults, grandparent-aged adults, and adults older than my own grandparents, but for the most part, I thought of “growing up” as a sort of finish line. Getting there might be a gradual process, but once you passed the line, you were done, and you were just as grown-up as any other grown-up. Of course, I found out long before turning eighteen that a person’s entire lifespan, and not just childhood, is a series of changes and landmarks. But it still came as a bit of a surprise when, well into my twenties, the society around me still didn’t consider me fully adult. To some extent, I think this is a current trend caused by social and economic factors; the age of financial independence has been pushed far past the age of legal adulthood or physical maturation. That’s something I intend to write much more about later. But this isn’t entirely a modern thing; it’s always been true that there are major distinctions between different age categories even within adulthood.

If we’re talking about biological aging or cognitive changes or the gradual accumulation of knowledge, I would imagine that aging has happened at the same rate for at least many centuries, if not for all of human history. But if we’re talking about intergenerational differences, I think that things have really sped up since the mid- to late- 1800s. For the last 150ish years, technology has developed so rapidly that each generation is growing up in a very different setting than the last one.

Telephone history serves as an obvious example. After Alexander Graham Bell got his telephone patented, it took 46 years before a third of American households had telephones. At the time, that surely seemed like a major cultural shift. Communication was suddenly much faster and easier; the telephone changed the way we stay in touch with family and friends, seek help in emergencies, and interact with coworkers or customers. Yet 46 years seems like an awfully slow transition by today’s standards. Now, over three quarters of Americans own smartphones, just 23 years after the first one was invented, and it’s been a mere 10 years since iOS and the Android operating system came into being. (The slightly-used iPhone 4 I bought in 2014 is so outdated that I’ve had strangers stop me to ooh and ah over my antique phone. I am not even kidding about that.) Similar statistics apply to various other appliances and devices.

But it’s not just about technology; along with those changes come shifts in every aspect of culture, from fashion and music to the prevalent philosophies and worldviews. The Renaissance period lasted for about three or four centuries, and the industrial revolution was several decades long (anywhere from 60ish years to almost 200 years, depending upon what source you consult), but in recent history, we talk about decades rather than eras. I don’t think that’s just a matter of nomenclature; I think that many of us genuinely think of the ‘80s or the ‘90s as bygone eras.

Long before I read the book that I mentioned at the beginning of this blog post, I was formulating an explanation of generational differences (especially in terms of political opinions) that was based on these types of changes. It’s more than just technology and popular culture that changes over time; it’s also the political environment and the economic state of affairs. For example, I was born just as the Soviet Union was breaking up and the Cold War was ending. Although there have obviously been international tensions and conflicts since then (and one can certainly argue that some of them are linked to the events and attitudes of the Cold War era), the fact of the matter is that I grew up in a political environment very different from that of the previous few decades. The post-nuclear landscape was just a sci-fi setting rather than a plausible fear, “terrorism” was a more common and frightening buzzword than “communism”, and we didn’t talk about “mutually assured destruction” because we all knew that the USA was a superpower and that we had less to fear from actual war than from school shootings, suicide bombings, and the like. Even the terrorist attacks of 2001 and the more recent threats from ISIS are recognized as originating from fringe groups, not from entire nations. It’s a commonly accepted fact that people instinctively fear or dislike “the other”, but I’d posit that it’s a much weaker instinct for those of us who grew up in post-Cold-War America. Whether you see that as good or bad, whether you call it “tolerance” or dismiss it as extreme liberalism, I think it explains a good deal about intergenerational differences in political opinion.

My point here is that any explanation of “why millennials are so…” has to take into account the various factors that made the ‘90s and ‘00s different from, say, the ‘70s and ‘80s. I’m not going to pretend to have sufficient expertise in sociology, childhood development, politics, economics, etc., to make a comprehensive list of all such factors, but I can certainly suggest a few that I think are major ones. As I discussed in the paragraph above, the end of the Cold War makes a difference. Perhaps even more significantly, modern technology has greatly increased the speed of communication, and it’s also meaningful that the entertainment industry has made more rapid technological advances than other fields. While commercialization has been an issue for generations, advertising is just getting more insidious and subliminal all the time, subtly altering our collective priorities even as we become less and less trustful of mainstream media and of rich and powerful people. And the emphasis on self-esteem in parenting and education is a big deal too; in fact, it’s the main topic of the book I’ve mentioned a few times now. Sure, that trend originated in writings from around the turn of the century, but it picked up steam slowly, and my generation is probably the first to be indoctrinated into it enough to experience the drawbacks. Much more on that later.

Another biggie is the changing views on education. As higher education has gotten more and more common over time, it’s also become more and more necessary. We’ve reached a point where a college education is not only essential for success in most career paths, it’s also a social expectation for the entire middle class and those from wealthy families. But higher education has also gotten more expensive over the past few decades, and educational loans have become more common and much larger. For the last decade or two, it’s been considered normal to take out student loans by the thousands and tens of thousands. So that’s another thing that makes the millennial experience different from that of earlier generations: It’s now normal and supposedly inevitable for young people to enter adulthood with astronomical debt. No longer is debt something that happens to you if you hit hard times or make bad life choices; now it’s practically a coming-of-age landmark. And in general, it’s the people who rack up more debt who become recognized as high achievers and those who make decisions enabling them to avoid debt who are thought of as inferior, or at least less successful. It’s no wonder that young adults are more likely than older adults to believe that the government is responsible for our financial well-being. Socialism sure does sound nice when long-term debt is normal and when the “right” life choices are more expensive than the “wrong” ones.

I’m not saying any of this to speak against or advocate for any particular political/economic stance. (For what it’s worth, I’m actually much more conservative than the average or stereotypical person of my age demographic.) My point here is that “millennial” attitudes make sense in context. If I follow the vague outline I have for this blog-post-series, that concept of context is going to be the central point of the whole thing. When you think about it, the only difference between generations is context. If you could somehow ignore the effects of cultural influences, technology, socio-economic circumstances, political environment, and social expectations, everything that’s left (basic personality traits, appreciation for things like nature or music, capacity for learning, etc.) might vary from person to person, but is pretty much constant from generation to generation.

Some Month-Old Thoughts on Politics and Patriotism


A month ago today, our country celebrated the 241st anniversary of the day the Declaration of Independence was signed. As is fitting, I spent much of the day contemplating the meaning of patriotism, the quintessentially American rhetoric about liberty and freedom, and the relationship that those concepts have with morality in general. (Does patriotism make you a good person? If someone loves America, does that make them complicit with the shortcomings and injustices that exist in our society? Can an individual be proud of their country and yet dislike their government?) This is why I take ridiculously long showers, y’all. I had intended to blog about that topic later in the day and had even mentally formulated much of the content of that blog post. It would have been long, philosophical, and maybe a little bit boring. So I never got around to finishing it. But now, upon opening the Word document containing the very beginning of a very rough draft, I’d like to go back and use some of that content. What follows is a slightly edited version of what I wrote a month ago.

In the grand scheme of history, 241 years is an extremely short period of time. But since it is significantly longer than the human lifespan, every twenty-first century American views the Declaration of Independence as distant history and takes for granted (to some extent) the ideas it expressed.

Of course, those ideas weren’t completely new and original even at the time. The founding fathers were inspired by Enlightenment philosophy, perhaps most notably the writings of John Locke. And the quintessentially American emphasis on rights traces its roots to the Magna Carta of 1215. But 800 years is still only a small fraction of the millennia that organized government has existed. Besides, the Magna Carta was only about the relationship between the monarchy and the nobility, not the rights of the common people. And until the eighteenth century, the concepts of equality and human rights didn’t play a large role in politics.

I think that we modern Americans don’t often think about just how new our “unalienable” rights are. It is certainly a beneficial thing that we have things like anti-discrimination laws, the freedoms laid out in the Bill of Rights, and the opportunity to vote for our leaders, but none of those things are universal throughout human history. That’s why we’ve made a holiday of the anniversary of the Declaration of Independence. As Americans, we’re proud that our national identity is all about freedom, equality, and democracy.

Or is it? Take a look on social media or the news, and you’ll see lots of complaints about rights being denied, demographic groups being marginalized, voices not being heard, and needs not being met. Some of it may be petty or even inaccurate, but much of it will be valid. Despite our rhetoric about “life, liberty, and the pursuit of happiness,” the United States of America is not a utopian nation. At any given time, few if any American citizens are satisfied with the government, and most politically-informed Americans have feelings of animosity against fellow Americans with different political opinions. It certainly seems as if Americans hate America.

I would argue that this is a side effect of a democratic government. Because we elect our leaders and thereby have some degree of influence in our government, we pay much closer attention to politics than the average person in, say, medieval Europe. Most of us are more informed than we probably would be if we didn’t have any voice in our political system. All of us who make an effort to be well-informed are qualified to form and express stances on at least a couple specific issues, and many of us are to some extent emotionally invested in those issues. That’s not because we’re jerks who like to argue, it’s because the outcome could affect us or our family, friends, and neighbors. If I’m strongly against a particular proposed bill, or I actively dislike a certain candidate, it’s probably because I anticipate a negative impact on my day-to-day life, the life of someone I care about, or society as a whole. So when others support that bill or that candidate, it’s going to bother me. Personally, I try very hard not to be judgmental, but it’s hard not to question others’ morals or intelligence when they’re “wrong” about politics.

I believe that, in general, most political debates are far more complex than we tend to think, and that our opinions are less about right versus wrong than about assumptions that we don’t even realize aren’t shared. A lot of it comes down to the fact that, when our political ideology promises us all such broad rights and freedoms, there will be situations where there’s a conflict between one person’s rights and another’s. For instance, where does “freedom of speech” go too far and become discrimination or hate speech? At what point is “self-defense” too preemptive to be justified and lawful? Is it better to regulate immigration as much as possible to avoid letting dangerous, “un-American” people into our country, or do our American values dictate that we should welcome newcomers without discrimination and gladly grant them those rights we’re so proud to have?

And more broadly, what does the government owe citizens? Is education a right? And if so, how much can the government reasonably do to ensure the quality of public education? Is quality, affordable health care a right? And if so, what can the government reasonably do to ensure the quality and affordability of health care? To what extent does the government owe us financial assistance if we need it? And is it a good or bad thing if the government cuts funding to public services, financial aid for education, welfare programs, scientific research and the arts, etc. in order to lower taxes and/or decrease debt?

These are some of the questions that create partisan divisions and turn us against our fellow citizens. They are examples of the issues that cause us to dislike particular leaders and fear for the future of our nation. And all of these questions ultimately come down to our interpretations of freedom and rights. So how does patriotism fit into the picture? How can we love America if we can’t even agree on what exactly our American values are?

The initial plan was for this blog post to actually answer that question. I was going to have a lot to say about the history and ideologies of nationalism, populism, and globalization. It was going to touch upon the difference between cultural identities and officially delineated countries. It was going to include a tangent on the separation of church and state, as well as a very long and involved tangent about the relationship between church and state. It may have discussed topics relating to American superiority, ranging from the “city on a hill” rhetoric of very early colonial days to the controversies about current American military presence in other countries.

And it was somehow going to come to a nice, neat conclusion that would tie all of those threads into a surprisingly small and pretty little knot. I don’t know exactly how that would have happened, but it would have had something to do with the idea that both patriotic fervor and political vitriol are often motivated by goodwill for people in the society around us. Thus, it’s all good. No one is in the wrong except Hitler. It’s going to take a few more generations before it’s socially acceptable to include Hitler in any overarching statements about human goodness.

(By the way, the answer is no, if I could go back in time and kill baby Hitler, I wouldn’t. Instead, I would go back in time and tell teenage Hitler what a great painter he is and how important it is that he never, ever give up his art. Don’t let the Academy of Fine Arts in Vienna crush your dreams, Adolf. Just keep painting and the world will thank you.)

But that would have taken much more time than I had available and much more research than I was prepared to do, not to mention that it would have been far too long for a single blog post. Maybe I’ll come back to some of those topics later. But probably not. Those long showers of mine mean that I will always have more blogging ideas than blogging time.
