
Is AI killing our ability to think? - 22 December 2025

I've been troubled by technology's threat to human consciousness since reading "The Glass Cage" by Nicholas Carr. Here's a link to my review of that book: "The Glass Cage" by Nicholas Carr | Stephen DeWitt Taylor. Carr's book was published in 2014, before the frenzied discussion of AI began to occupy our minds in recent years. Carr wondered whether living in a dumbed-down state had become the operative paradigm for too many Americans today.

From reading "The Glass Cage," I learned about the Dancing Mouse study by Harvard psychologist Robert M. Yerkes, published in 1907.

Yerkes was given fifty mice by a friend and decided to perform an experiment with them. He put the mice in a box with two exits, one colored white and the other black. The smell of cheese on the far side enticed the mice to go through one of the two doors to get the cheese. The mice that went through the white exit happily raced to the cheese. The mice that went through the black exit were given an electric shock. Yerkes wanted to see how quickly a mouse shocked at the black exit would learn to routinely take the white one. He administered the shock at three levels: light, medium, and intense. He surmised that the mice shocked intensely at the black exit would be the quickest to learn to take the white exit. He was wrong. The mice receiving the medium shock learned the quickest. The mice receiving the light shock were next... they learned slowly, not caring much one way or the other which door they went through, but eventually they learned to take the white exit. The heavily shocked mice just went nuts. They didn't learn a thing. Many of them entered the black door time and time again, notwithstanding the heavy shock.

What's the conclusion of Yerkes's experiment? That brains have an optimal level of challenge at which they operate most effectively. The idea in life, whether learning, working, or dealing with the day-to-day, is to operate at or near that optimal level of brain activity. In this way we maximize personal growth and development. We give purpose to life when our brains are working effectively. If we're operating at more modest levels of brain activity, we're not growing. If we overload our brains, we become dysfunctional. Either way, overload or underload, we're not optimizing our human potential. Only by challenging our brains at the optimal level can we avoid feelings of inadequacy and unhappiness. NOTE: this is why Guaranteed Basic Income (GBI) is a bad idea.

One conclusion of the Yerkes experiment, again, is that operating at more modest levels of brain activity means we're not growing. Does the use of AI suboptimize brain activity and ultimately inhibit human intellectual growth?

For years I would take speaker notes from talks given at LSDM, our ROMEO group in Park City, Utah. I would transcribe those notes and synthesize them into a form of executive summary of the talk. Someone once asked me why I took the time to do this. I answered that the note-taking and synthesizing exercised my brain. The process forced me to prove to myself that I actually understood what the speaker said, and the notes would aid my recollection later if I wanted them to. Those SDT-synthesized speaker notes through mid-2023, over five hundred of them, are archived on LSDM's website: http://www.lsdm-parkcity.com.

In mid-2023 I began to use an AI tool provided by Zoom to synthesize the speaker notes... to undertake the task that I had been doing manually since the early 2000s. At first, Zoom's "AI Companion" would make a lot of mistakes, such as confusing male and female names and missing key points of the discussion. Today, the AI Companion synthesizer requires almost no input from me other than cutting and pasting its output into the LSDM website. In fact, the AI Companion can do a good job of summarizing even a less capable speaker's talk.

Question: If AI Companion can cut out all the note-taking and summary-synthesizing work I had performed for LSDM speakers, what will AI's impact be on a young person who hasn't yet developed analytical skills? AI can produce a quality term paper for a high school sophomore without the student ever having to go through the thought process ordinarily required to write the paper. It's my understanding that schoolteachers are having a hard time preventing students from using AI tools to write their papers.

I don't have the answers, but based on my own experience as a geezer transferring the note-taking and synthesis process over to a tech tool, logic tells me that I am suboptimizing brain activity and inhibiting my intellectual growth. At eighty years of age, this doesn't matter so much; I am not worried for myself. But the effect of AI on the intellectual development of the young seems, as I see it currently, to have more downside than upside. It seems to me that more and more Americans using AI will become like Yerkes's slow-learning, lightly shocked mice, not caring one way or the other which door they enter.