Years ago, Ian DeAndrea-Lazarus, a PhD student at the University of Rochester School of Medicine, was in a car accident in Washington, D.C. Another driver had sped through a red light, colliding with his car and sending it spinning. DeAndrea-Lazarus suffered whiplash. When emergency personnel arrived, he was put on a stretcher and transported to a nearby hospital. But DeAndrea-Lazarus is deaf: having his neck immobilized eliminated his ability to communicate. “It was incredibly frustrating,” he recalls. “I had to tell the EMTs to come within my field of view if they wanted to talk to me.” At the hospital, it took hours for an American Sign Language (ASL) interpreter to become available. “This is a common problem for deaf people everywhere,” he laments.
It was frustrating healthcare experiences such as this that pushed DeAndrea-Lazarus, 30, toward medicine. He wanted to make the process better for people like him. Historically, the medical system has not served the deaf well, resulting in misdiagnoses, poorer health outcomes, and healthcare avoidance. Among non-English speakers, the deaf are at the greatest risk of being misunderstood by healthcare providers. And the problem is not just one of sound. Physicians rely on written English, even though studies show that the deaf community has, on average, lower English literacy than the hearing; the average deaf high school senior reads between the third- and fourth-grade levels. Moreover, physicians often view deaf patients strictly in terms of their deafness, looking to “fix the ear,” says DeAndrea-Lazarus, even though “many deaf people don’t view themselves as disabled. They consider themselves to be a part of a linguistic minority. It is the environment that is disabling,” he says.
Slowly, this is changing. Certainly it has in Rochester, which has the highest per capita concentration of deaf Americans. There, DeAndrea-Lazarus helps run an annual role-reversal program called Deaf Strong Hospital, in which hearing students play patients in a simulated hospital staffed by doctors who communicate only in ASL. “Many of my classmates have learned ASL as a result and have reported having positive interactions with deaf patients during their clinical rotations,” says DeAndrea-Lazarus of the program’s impact.
Technology is also having a profound effect on deaf-hearing communication. Currently, DeAndrea-Lazarus is working on a potentially revolutionary technology: an app that pairs with Microsoft HoloLens, an augmented-reality visor, to transcribe spoken English as text in the wearer’s field of view. When released, it will allow deaf people to follow spoken language almost seamlessly, in real time and in the real world. We reached out to hear more.
Why did you get into medicine?
The deaf community is vastly underrepresented in medicine and I saw this as an opportunity to show the world that deaf people, given the right tools, are capable of doing anything they set their mind to. I have also had adverse experiences dealing with healthcare providers who were unfamiliar with the needs of deaf people. I’ve been told that I needed to bring or pay for my own interpreter, for instance, which is a violation of the Americans with Disabilities Act.
How beneficial is the Deaf Strong Hospital program for medical students?
Deaf Strong Hospital is an opportunity to teach first-year medical students what it is like to experience communication barriers firsthand. All of the students experience the consequences of misunderstandings and ineffective communication, such as being referred to the psychiatrist for an unspecified mental illness, which is something that has historically happened to many deaf people. The students are also given an hour-long lecture by me on deaf culture and disparities in healthcare.
I also give examples of “deaf utopias” around the world, such as Martha’s Vineyard, where a high prevalence of hereditary deafness resulted in the entire community learning sign language, deaf and hearing alike. People living on that island were no longer disabled by the environment, as there was no communication barrier between deaf and hearing people. I had that experience at Gallaudet, the only deaf university in the world, where I often forgot that I was deaf because everyone around me, deaf or hearing, used ASL to communicate.
The reaction from the medical students has been overwhelmingly positive. This is also part of why medical school has not been as challenging as I thought it would be. This program should definitely be implemented in every medical school curriculum. Rochester is not the only city with a large deaf population; there are other cities with a high prevalence of deaf people, such as Austin, Washington, D.C., Fremont, and Pittsburgh. It is my hope that one day the world will be as accessible to every deaf person as Martha’s Vineyard once was.
How did your idea for the app come about?
As a child, I enjoyed science fiction films such as Minority Report. I imagined a world where we could see text translations of the speech occurring around us. The advent of wearable glasses took us closer to making that dream a reality. A few years ago I obtained Google Glass with funding from the graduate program here and developed a system where I could see real-time captions appearing in my field of view. The captions were being produced by a professional captioner who was listening to an audio feed. The next step was to utilize speech-to-text software to cut out the middleman. Microsoft created the HoloLens, which had voice recognition built in, so I obtained one and developed a simple app that tapped into this ability. The speech-to-text software is pretty good, but not nearly as accurate as it needs to be for environments such as medical school, where terminology is highly specialized.
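The pipeline described here runs speech recognition on live audio and then renders the transcript as captions in the wearer’s field of view. DeAndrea-Lazarus’s HoloLens app itself is not public, so the sketch below is purely hypothetical: it illustrates only the last, display-side step, breaking a recognized transcript into short lines sized for a narrow heads-up display.

```python
# Hypothetical sketch: the real app's code and APIs are not shown in the
# article. This simulates the captioning step of such a pipeline, turning
# a stream of recognized words into short lines that fit a narrow display.

def caption_lines(words, max_chars=28):
    """Group recognized words into display lines of at most max_chars each."""
    lines, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate          # word still fits on this line
        else:
            if current:
                lines.append(current)    # flush the full line
            current = word               # start a new line with this word
    if current:
        lines.append(current)            # flush any remainder
    return lines

# Example: a transcript as it might arrive from a speech-to-text engine.
transcript = "please keep your neck still while we check for injuries".split()
for line in caption_lines(transcript):
    print(line)
```

In a real app this function would be fed incrementally as the recognizer emits words, with old lines scrolling out of view as new ones arrive.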
What sorts of responses have you had?
I exhibited my app during a talk I gave at the Association of Medical Professionals with Hearing Losses (AMPHL) conference here in Rochester in 2017, and the reaction was very positive. There was a lot of excitement about the app’s potential to eliminate communication barriers, especially in medicine. However, the HoloLens is somewhat bulky and intrusive, and some feel that it presents a physical barrier between the wearer and the speaker. With Moore’s Law in mind, the hardware will become smaller and the computing power greater in no time.
Do you have any other tech ideas in this realm?
My vision for the future is to tap into the brain’s existing language foundation and cut out the middleman again, which in this case is the wearable device. I imagine a world where we can communicate directly with each other by sending pulses of activity between our brains’ language regions. This would be ideal because it would tap into anyone’s natural first language, instead of mandating that every deaf child learn a spoken language that is not fully accessible to them. Deaf children would be able to acquire a language that is fully accessible to them, visually, and communicate in it with someone who may not know that language and still be understood.
If you really think about it, the ear is also a middleman, receiving acoustic signals from the environment and sending them to the brain via electrical pulses. Why don’t we skip the ear and go to the source: the brain? Our brains do not really care whether we receive language through the ears or the eyes. The same pathways are activated in the brain.
What are some other ways in which tech is improving things for deaf people?
There is a ton of technology in general that has been helpful for the deaf community. Hearing aids and cochlear implants give us access to a certain amount of sound. In the United States, and now Canada, videophones have given us the ability to make phone calls, whether to a deaf person or a hearing person, via Video Relay Services. We are able to use this service on our smartphones via several apps, such as Sorenson VRS, Convo Relay, Purple VRS, and others. I’ve also found my Apple Watch to be very useful as an alarm clock to help me wake up in the morning (no more relying on bulky, vibrating alarm clocks!). My Ring doorbell communicates with my Philips Hue lightbulbs to let me know when someone is at the door. My Nest Protect sends me a notification and turns my Philips Hue lightbulbs red when there is too much smoke in the kitchen.
My wife, who is also deaf, and I have a three-month old son and we love our Lollipop baby camera, which sends notifications to our iPhones and my Apple Watch whenever my son is crying or moving in his crib. Technology has truly been a friend of the deaf community even though it gives us a hard time sometimes. I think we are moving closer to connecting the entire world, deaf or hearing, as technology advances.