Research of human language abilities may hold answer to computer language processing
https://longitude.site/research-of-human-language-abilities-may-hold-answer-to-computer-language-processing/
Tue, 29 Oct 2019

 

Claudia Zhu
University of Pennsylvania
Philadelphia (39.9° N, 75.1° W)

 

featuring Tatiana Schnur, Associate Professor of Neurosurgery and Neuroscience, Baylor College of Medicine, Houston (29.7° N, 95.3° W)

Dr. Tatiana Schnur received her B.A. in Cognitive Science at the University of Virginia where she was an Echols Scholar and a Howard Hughes Medical Institute Biomedical Sciences Undergraduate Fellow. She received her PhD from Harvard University in Cognition, Brain, and Behavior. She completed a three-year National Institutes of Health (NIH) funded T32 postdoctoral fellowship in neurological rehabilitation at the University of Pennsylvania and the Moss Rehabilitation Research Institute. Dr. Schnur is currently an Associate Professor of Neurosurgery and Neuroscience at the Baylor College of Medicine in Houston, TX.

-From UTHealth


I’m not sure you would believe me if I told you right now that you have superpowers. But consider that in ordinary conversation humans produce two to three words every second, draw from an average spoken vocabulary of around 40,000 words, and make a mistake only about once every thousand words. We’re pretty remarkable. As humans, we generate speech incredibly efficiently and accurately, yet according to Dr. Tatiana Schnur, associate professor of neurosurgery and neuroscience at Baylor College of Medicine, it is still “unclear what kind of information is required to produce language.” As far as we know, humans are the only species with this ability.

Dr. Schnur is one of the researchers studying our ability to produce speech and understand language. She asks big questions about how language works: what information is in the system in the first place, how it is organized, and how we access it in order to produce speech. She designs rigorous approaches to answering these questions. One way Dr. Schnur conducts her research is by studying how and where the language system “breaks” in patients with brain damage who are no longer able to produce fluent speech. Consider a stroke, the equivalent of “taking a hammer to the system,” which affects different people in different ways. By understanding how the damage manifests in patients, Dr. Schnur is able to deduce rules of the system and test hypotheses based on the damage it has incurred.

Another area of Dr. Schnur’s research is working memory, one of the capacities used to produce words in conversation (think of the continuous flow of words). More specifically, Dr. Schnur is interested in whether a person’s ability to hold on to information, their short-term memory, helps them produce longer strings of words. Recently, Dr. Schnur and her colleague Dr. Randi Martin published a paper demonstrating that working memory plays a large role in our ability to produce longer phrases, such as “the big red dog” as opposed to “the dog.” Dr. Schnur concluded that recovery efforts for patients who have trouble producing multiword speech might focus on working memory rather than on language itself. A number of potential therapies for patients with brain damage stem from this research.

Currently, Dr. Schnur is working on a project to “assess the degree to which someone has a problem accessing the meaning of things.” Sometimes stroke patients are unable to produce a word because they have lost its meaning. For example, a patient might look at a pair of scissors and say, “I don’t know how to use that anymore. I don’t know what it is.” In such cases, patients have lost the meaning of the object itself. In other cases, patients have “lost” the word but know exactly what the object is. For example, a patient might say, “I cut herbs with that in the kitchen. I use it to cut my daughter’s hair, but what is the name? I can’t get to it.” Dr. Schnur is most interested in the semantic side of the issue. To investigate it, she plans to study the semantic distance between the word the patient wants to say and the word they say instead. Dr. Schnur described the phenomenon as follows:

The word that they produced instead of scissors was either close in meaning or really far away. If it’s really far away, it suggests that they don’t have that meaning anymore. But if it’s close, like they say “knife” instead of “scissors,” you know that they’ve got some of that concept there, potentially…So, they were supposed to say “scissors” and they said something else. How close was it to “scissors”? … people can make errors when they’ve lost the meanings of things, and so if they’ve lost complete meaning, maybe they’re really far from the target, but if they have some residual meaning, then the error they might make might be closer to the target.
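To make the idea of semantic distance concrete, here is a minimal sketch in Python using the gensim library and the publicly released Google News word2vec vectors. The library and model name are illustrative assumptions; the interview only specifies that the lab used Google’s publicly available Word2vec.

```python
import gensim.downloader as api

# Assumption for illustration: the publicly released Google News word2vec
# vectors stand in for whichever embeddings the lab actually used.
model = api.load("word2vec-google-news-300")

# Cosine similarity is higher for words that are closer in meaning, so an
# error like "knife" for "scissors" scores much higher than an unrelated
# word such as "cloud".
print(model.similarity("scissors", "knife"))
print(model.similarity("scissors", "cloud"))
```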

Linguistics research such as Dr. Schnur’s has significant ramifications for another field, natural language processing (NLP). Intimately related to artificial intelligence and one of the biggest applications of machine learning, NLP lies at the intersection of linguistics, computer science, and information engineering, with the end goal of programming computers to process, analyze, and eventually produce natural language (such as conversations and written text). Just as neural networks are loosely based on our own neural circuitry, much of the inspiration for the design of NLP systems comes from the way humans learn and understand language.

Dr. Schnur’s research can guide us closer to unearthing the circuitry of our own minds and toward developing systems that can replicate our language and speech abilities. Many leading figures in NLP believe that the biggest open problems in the field are all related to understanding. The end goal is to create a system that can read just as humans do, but how we get there is far from clear. The two leading approaches at the moment are building in innate biases and learning from scratch. Many NLP researchers believe we should encode some sort of common sense and knowledge base into an NLP program before training it to process natural language, while researchers who side with reinforcement learning believe the model should learn everything from scratch. Dr. Schnur said, “if you want to generate a system that produces speech, like a computer program that can produce speech, and you want it to work very quickly, then maybe one of those human parameters is that we’re able to produce these words quickly because we can retrieve the meaning because all the things that are similar in meaning are grouped together.” Perhaps we are closer to reaching this goal than we think. One thing is for sure: the future of NLP is intertwined with today’s linguistics research.

For more information on the biggest open problems in NLP, see
http://ruder.io/4-biggest-open-problems-in-nlp/


Highlights from the interview:

What led you to pursue your career as a researcher and to study cognition behavior in college?

My father is a physicist, and my mom is a linguist. So, I tried to cross those two interests, basically, and it led me to the field of psycholinguistics.

That’s the study of how language works. I did science fair projects in high school, and I enjoyed doing the research and writing it up. I was a Westinghouse finalist (the Westinghouse Science Talent Search, now known as the Regeneron Science Talent Search), so that made me think, oh, I might be good at this, and I really enjoy doing it. I think I did three years of science fairs and science projects, and by the time I got to college, I thought, okay, I want to marry this interest I have in linguistics and how language works with something about the brain—about the biology.

Then I did research every summer in college. I worked with a neurolinguist who got his PhD from MIT and was at Massachusetts General Hospital. I worked with him every summer, and that also furthered my interest in pursuing cognitive science.

Do you think your family really acted as mentors who helped you cultivate your interest, or was there someone else in particular that spurred on your career?

Both my parents. My mother was the one who introduced me to some old literature in the study of language and how we process sentences, from the ’70s, that was a little bit related to her thesis work. Then it was my father who gave me this idea about the scientific method and statistics and the way to approach the questions. You can have a question like, “How does the brain work?” or “How does the mind work?” but you need a rigorous path about how to approach that question. I was only in high school, but he got me started on it.

Could you tell me a little bit more about the research that you conduct?

My primary interest is in language production. So, we produce two to three words per second—of a spoken vocabulary of about 40,000 words—and we make a mistake only once in about a thousand words. We never think about this ability, that we’re able to generate this speech so efficiently. Occasionally we make mistakes, but, for the most part, we’re able to do it. However, it is still unclear what kind of information is required to produce language. Humans are the only ones with this ability.

It might appeal to you from a computer science perspective. The question I have is what information is in the system in the first place? How is it organized? How do you access it in order to use speech? Language production is the part that I’m particularly interested in, and I’m interested in the theoretical way of how any human is geared up after birth to learn language and then use it, and they become proficient, totally proficient, by age 10.

Then the other part of my research is thinking about some people who have brain damage, as they are not able to produce speech fluently anymore. They have problems. So [we look at] what is it with the language system that they have problems with? From a theoretical perspective, if you take a hammer to a system like a stroke does, you can find out what the rules of the system are when you damage it in certain ways. So, you can test a hypothesis, and it will tell you about the type of information that is stored. For example, a while back, the person I did my PhD thesis with had a paper in Nature [a scientific journal]. [In that paper, they discuss different patients:] one who had lost the ability to produce verbs, and another who had lost the ability to produce nouns. What that demonstrated was that the information was stored in different parts of the brain, so for whatever biological reason, the way we use object names and action names needed different biological circuitry. I thought that was really fascinating. I thought it was the coolest thing, that you could have this information that you never think about, and it could get damaged, so you have a deficit, a problem producing one type of word versus another. But that’s an example of where you can look at the damage and it tells you something about the system. This was before there was functional neuroimaging. I mean we had PET, but it wasn’t as good at that time. Now we can use fMRI and design experiments—get people to think about different kinds of objects or do things with language and then see which parts of their brain are involved.

So, you ask them questions and then see different parts of the brain light up?

You have them do tasks like language tasks. One of the earlier studies was how people name pictures of objects and then pictures of actions. You see different areas of the brain respond to these different kinds of pictures when you’re producing the name.

Oh, wow.

The other part of my research—so one is just very theoretical, how does language work, language production—I’m also interested in how we understand speech, but my focus in my career has been language production. But now I also focus on people who have brain damage as a result of stroke. So here, from a scientific perspective, you can look at a stroke to try to understand how language is organized, but also from a clinical perspective, you can try to think about ways to help people recover.

How does this help you understand how to help people recover?

For example, in some of my work with a colleague at Rice University—her name is Randi Martin—we were interested in whether one of the capacities to help you produce words in multiword speech, like in conversation, was working memory. Whether your ability to hold onto information, short-term memory, whether that would help you produce longer strings of words. We published a recent paper that demonstrated that. That yes, indeed, it’s not a language faculty, but you have this other cognitive capacity, working memory, that seems to help your ability to produce longer phrases of words. Groupings of words. Like the “big red dog” as opposed to just “the dog.” In order to produce a longer phrase…”I ate that very yummy bowl of noodles” instead of “ate the noodles,” working memory helps you to do that. And so, a possibility for recovery is that if someone is having problems producing multiword speech, instead of focusing on the language side, maybe try to rejuvenate or bring back their working memory. And by bringing back their working memory, you might get back their language.

Could you give me a little bit of background on how your research impacts natural language processing (NLP) research these days? So, a lot of the researchers in deep learning and machine learning are really focusing on the applied area of natural language processing. I would like to know if some of your research ties along with that.

I don’t do anything in natural language processing myself, but we do use deep learning algorithms. The applications I could talk to you about are on the clinical side, this research with stroke patients.

One of the projects we’re working on right now is to try to assess the degree to which someone has a problem accessing the meaning of things. After they have a stroke, you can have multiple problems in producing a word. You could not be able to produce a word for multiple reasons; one reason might be that you’ve lost the meaning of it. This sometimes happens, for example, with advanced dementia cases. You look at a pair of scissors and you say, “I don’t know how to use that anymore. I don’t know what it is.” They’ve lost the meaning of the thing, so they can’t even say scissors because they don’t know what it is anymore. Another reason you might not be able to say “scissors” is that you know exactly what it is, but you can’t get to the word. You say something like, “Oh, I cut herbs with that in the kitchen, I use it to cut my daughter’s hair, but what is the name? I can’t get to it.” Another reason might be that you can’t get to the sounds. You know what it is, you say, “Oh, it rhymes with this other thing. It has two syllables. It starts with an ‘s.'” Or someone might have problems with just moving their mouth. They can’t get to the motor programming to move their tongue to say the word “scissors.” They might go, “S-s-s-s.”

All of these might be reasons you can’t produce a word. So, we were interested in the meaning side. The semantic side. We asked if we could assess the degree to which, when they made a mistake naming an object, it was because something about the meaning had been messed up. The word that they produced instead of scissors was either close in meaning or really far away. If it’s really far away, it suggests that they don’t have that meaning anymore. But if it’s close, like they say “knife” instead of “scissors,” you know that they’ve got some of that concept there, potentially. So, we were using something called Word2vec from Google, which Google makes publicly available, and which estimates, from huge amounts of text, the probability that two words occur together in a long sequence of text. The idea being that if two words happen to occur in that paragraph, they probably share some meaning. Or in that word string. But if those two words are very far apart, then they are probably not in much correspondence, because out of all of this text, people never refer to those two things in the same breath. I use the word breath, but it’s over a sequence.

We took all the naming attempts from these patients, all the errors that they made, because we wanted to know how close semantically, in meaning, was the word that they produced to the target when it was not the right word. So, they were supposed to say “scissors” and they said something else. How close was it to “scissors”? So, we put it into Word2vec and said how close is “knife,” close in semantic distance, to “scissors.” And it gives back a number. We can do that across everything that particular patient said, and we can compare it to somebody else. And then we can say this person is very close on target. They tried to say “scissors,” but they didn’t quite get there, but they got a word that was close in meaning to the intention of the thing they were supposed to say. But somebody else was really far apart, like the words they said never occurred in those strings the Google database had.
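The per-patient comparison described above can be sketched the same way: average the word2vec similarity between each target and the word the patient produced instead, then compare those averages across patients. The patient data below are made up, and the model choice is again an assumption.

```python
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # pre-trained Google News vectors

# Hypothetical (target, produced-word) naming errors for two patients.
attempts = {
    "patient_A": [("scissors", "knife"), ("dog", "cat"), ("apple", "pear")],
    "patient_B": [("scissors", "table"), ("dog", "cloud"), ("apple", "shoe")],
}

for patient, errors in attempts.items():
    # Keep only pairs where both words are in the model's vocabulary.
    sims = [model.similarity(target, produced)
            for target, produced in errors
            if target in model and produced in model]
    # A higher average means the patient's errors stay close in meaning to
    # the intended word, suggesting some residual semantic knowledge.
    print(patient, sum(sims) / len(sims) if sims else "no scorable errors")
```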

That’s an example where they constructed these probabilities of how often these words occur, and we can use that information to get back to the brain and say okay, does this describe the semantic space that someone might have in their head? Because remember I said people can make errors when they’ve lost the meanings of things, and so if they’ve lost complete meaning, maybe they’re really far from the target, but if they have some residual meaning, then the error they might make might be closer to the target.

Oh, I see.

Oh, you know, cut. I don’t know the name of it, but it cuts. Right? Like they say “knife” instead of “scissors” or maybe “scissors” and “cut” occur very close to each other in a paragraph, so you know they have some meaning to it.

People have already shown this to a degree, that the errors that we make when we produce speech, we tend to—if we slip a word up, it tends to be related semantically. So, if you said, “Oh, I took my dog—I mean my cat—for a walk.” Those errors tend to be semantically related, so if you’re trying to generate an artificial system that produces language, one of the parameters you might use to organize that information is the degree to which things are related in meaning.

And so if you want to generate a system that produces speech, like a computer program that can produce speech, and you want it to work very quickly, then maybe one of those human parameters is that we’re able to produce these words quickly because we can retrieve the meaning because all the things that are similar in meaning are grouped together. So, this is a way you can sort of figure out what the rules are.
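As a rough illustration of “all the things that are similar in meaning are grouped together,” an embedding space like word2vec lets a system pull up a word’s semantic neighborhood in a single lookup. This is only a sketch of the general idea, not a description of any particular speech-production architecture.

```python
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# Nearest neighbors in the embedding space are the words most similar in
# meaning, so retrieving the "neighborhood" of a concept is one cheap query.
print(model.most_similar("scissors", topn=5))
```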

 

Interview excerpts have been lightly edited for clarity and readability and approved by the interviewee.

 

 
