(COLOMBO, LANKAPUVATH) – An interview with the force behind Seeing AI, the talking camera app for people who are blind or with low vision.
By Geoff Spencer (Microsoft Asia Writer)
Saqib Shaikh lost his sight at the age of seven, fell in love with computers as a schoolboy in Britain and grew up to become a top software engineer with an inspirational mission.
Standing at the intersection of artificial intelligence (AI) and inclusive design, he believes we can create intelligent machines to empower millions of people around the world with disabilities to achieve more and live enhanced lives. The knowledge gained from targeting and solving the problems of those with special needs, he says, can only drive technological innovation that benefits everyone across society.
Saqib has had a lifelong relationship with advancing digital technology. At a school for children who are blind or with low vision, he learned self-reliance and developed a burning sense of curiosity. As a 10-year-old, he was given a rudimentary talking PC, and that led him to learn how to program. His intellectual romance with computer science blossomed at university, where he doggedly overcame all sorts of day-to-day challenges on campus to graduate top of his class with a master’s degree in AI.
His quest nowadays is to create greater accessibility and inclusion – to level the playing field for everyone. As the driving force behind Microsoft’s Seeing AI project, he is exploring how AI can enable people who are blind or with low vision to achieve more with freedom and confidence.
His team launched the Seeing AI app in 2017, giving those who cannot see a new way to understand the world through the cameras on their smartphones. Since then, it has helped customers with more than 10 million tasks. A user merely points his or her phone, and the app describes aloud what it sees. It might be in a room, on a street, in a mall, or in an office – customers are using the app in all sorts of situations. With facial-recognition technology, the app can name friends and acquaintances, describe the physical appearances of people and even predict their moods. It can read printed text in books, newspapers, menus, and signs aloud. It can even identify banknotes.
Saqib currently works and lives in London. We caught up with him on a recent visit to Singapore, where he told audiences how technology has helped him realize his potential and how it promises to improve the lives of everyone – not just those with disabilities.
“There are a lot of problems. But for every problem, there is a solution,” he says.
Q: How has your time been at Microsoft?
I’ve been with Microsoft for 13 years. It’s been a fantastic experience, with the opportunity to impact so many people’s lives for the better.
Q: How has this been affected by your visual disability?
This isn’t really something I think about. From the start, I’ve been fortunate to be surrounded by supportive colleagues. My not being able to see just fades into the background. Whenever a problem pops up, you just find a solution.
Q: Tell us about cultural transformation within the company and how it is encouraging inclusion and accessibility.
We’ve definitely seen a change in recent years. While there’s always been this passionate group working on accessibility at the company, we’ve seen this become part of the development process, rather than something special. There is a recognition that we should proactively find and hire people from all walks of life so that all our products can truly become inclusive. And the hackathons, like the one Seeing AI started at, have provided a great opportunity for employees to explore their passions and sometimes even create new features for our products.
Q: How does it feel to be an ambassador for the power of technology and inclusion?
I think I like it, though I have never really thought about it in such grand terms. If I can spread a message that enables our customers to think differently about disability – about being more inclusive – then, I am happy. If I can inspire or influence people in that direction, that’s great.
Q: How did you get interested in computers?
When I was about 10, I got to use a talking computer at school – I loved the independence it gave me. And, that led me to learn how to program, and the rest is history.
What’s really exciting is that kids that age today have assistive technologies included in most devices, and with the internet, you can access so much information.
Q: Where did the concept of Seeing AI come from?
The first time I remember thinking about this idea was way back at university, more than 15 years ago. We had ideation sessions in the dormitory. We’d say things like, “okay, we should make a pair of glasses with a camera on it that can look around at everything and describe it out loud.”
Back then, we totally couldn’t do that. But in 2014, we had our first Hackathon at Microsoft. And, I thought again about that old idea. The first prototypes were just rudimentary. They did facial recognition and a few other things.
But then we started working with the great scientists at Microsoft Research (Microsoft’s research and development arm). The technologies and the algorithms through deep learning got better and better. It came together around that time with cloud computing. Eventually, we got to the point where a computer could at least attempt to describe what is going on in a photo. That was the real breakthrough.
Things are still progressing. We haven’t yet realized our dream. But we are getting closer. It’s all about identifying a need and then weaving together the technologies to build solutions.