Saqib Shaikh is blind, lives in London, and is a developer at Microsoft. He lost his sight at age seven. Saqib found inspiration in software development and is helping build Seeing AI, a research project that helps people who are blind or have low vision better understand who and what is around them. The app is built using intelligence APIs from Microsoft Cognitive Services.
It is pretty amazing that an app can use a camera to capture an image or a video feed, analyze the scene with artificial intelligence, and vocalize to the user what it sees. In this example the analysis is done for the benefit of a human user, but imagine what could be possible if, instead, one computer program served another computer program as the consumer of the analysis. What might that make possible?
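Seeing AI's internal implementation is not public, but the basic pipeline it describes, an image goes in, a spoken description comes out, can be sketched against the publicly documented Azure Computer Vision REST API (part of Cognitive Services). The sketch below is a minimal illustration, not the app's actual code; the endpoint, key, and image file name are placeholders you would supply yourself.

```python
import requests

# Placeholders (assumptions): supply your own Azure Computer Vision
# resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def describe_image(image_path: str) -> str:
    """Send a local image to the Computer Vision 'analyze' endpoint and
    return its best natural-language caption."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return captions[0]["text"] if captions else "No description available."

if __name__ == "__main__":
    # In a Seeing AI-style app this caption would be handed to a
    # text-to-speech engine; here we simply print it. The file name
    # is a hypothetical example.
    print(describe_image("street_scene.jpg"))
```

Note that the caption returned here is plain text, which is exactly why the program-to-program scenario above is plausible: another piece of software could consume that description just as easily as a speech synthesizer can.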
How might you or your organization make use of technology like this?
What direction do you think technology like this will take?