Unis turn to webcam-watching AI to invigilate students taking exams. Of course, it struggles with people of color

Plus: IBM shares its ML know-how in schizophrenia fight

In brief AI software designed to monitor students via webcam as they take their tests – to detect any attempts at cheating – sometimes fails to identify the students due to their skin color.

Products like ExamSoft are used by colleges and other organizations to make sure students aren't cheating when they sit exams in front of a computer. These organizations have turned to such measures because the coronavirus pandemic makes it impossible to cram large groups of students into one room and invigilate them in person. The algorithms pick up on things like glances at something off-screen – a sign that you're looking up answers or relying on notes – or someone else getting behind the keyboard to take the test, and alert officials.

But these systems often fail to detect and track people of color, complaining, for example, that the lighting is too low. When a law graduate taking a New York bar mock exam, invigilated by ExamSoft, complained to the software maker that its AI couldn't identify his face, he was told to “sit directly in front of a lighting source such as a lamp.” Even when he did so, the software still failed to detect him.

“I am a brown person with a beard. I do not believe my features are particularly anomalous,” he told the New York Times. “I cannot imagine any larger disaster than spending the last four months of my life unemployed and uninsured during a global pandemic in order to study for an exam that I cannot take on exam day because of racist technology.”

Machine-learning systems often fail to recognize non-White faces because of the training data involved or the algorithms used to handle lighting and contrast.
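ExamSoft hasn't published its detection pipeline, so as a rough illustration of the kind of per-frame check such proctoring tools perform – and of where lighting handling enters the picture – here's a minimal sketch using OpenCV's stock Haar-cascade face detector. The video file name, thresholds, and flagging rules are assumptions for illustration only, not the vendor's actual logic.

```python
# Minimal sketch of a webcam-proctoring face check; NOT ExamSoft's actual pipeline.
# Assumptions: OpenCV is installed, frames come from a recorded exam session
# ("exam_session.mp4" is a hypothetical file name), and the flagging rules are invented.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def check_frame(frame):
    """Return a flag string if the frame looks suspicious, else None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Histogram equalization is one common way to compensate for poor lighting;
    # detectors tuned on well-lit, lighter-skinned faces can still miss darker faces.
    gray = cv2.equalizeHist(gray)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face detected"        # the failure mode described above
    if len(faces) > 1:
        return "multiple people in frame"
    return None

cap = cv2.VideoCapture("exam_session.mp4")  # hypothetical recording
frame_idx, flags = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    flag = check_frame(frame)
    if flag:
        flags.append((frame_idx, flag))
    frame_idx += 1
cap.release()
print(flags[:10])
```

A production system would also track gaze direction and verify identity over time; the point here is simply that everything downstream depends on the face being detected in the first place.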

A spokesperson for ExamSoft told the NYT: “The vast majority of those who have attempted to complete a mock exam have successfully done so. We’re working around the clock to ensure a successful exam experience for all bar candidates.”

IBM offers its AI to schizophrenia probe

IBM says it will contribute to a US government-backed project to develop an early-warning system that can reliably identify and help treat people likely to develop schizophrenia.

Specifically, the goal is to "generate tools that will considerably improve success in developing early stage interventions for patients who are at risk of developing schizophrenia," according to Uncle Sam.

The five-year, $99m program is part of a broader effort, dubbed the Accelerating Medicines Partnership (AMP), involving the US National Institutes of Health, the Food and Drug Administration, and various for-profit and non-profit organizations.

“The IBM Research team will contribute its knowledge in data-driven artificial intelligence applications to brain imaging for neurodevelopmental and neurodegenerative disorders, as demonstrated in schizophrenia, chronic pain and Huntington’s disease,” it said this week.

“The team will also contribute its knowledge to analyze and guide the collection of language samples, based on the track record of successful application of Natural Language Processing approaches to predict onset of psychosis in the [clinical high risk] population.”
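IBM hasn't said which language features it will use. One family of approaches reported in the psychosis-prediction research literature measures how semantically coherent consecutive sentences in a speech sample are; the sketch below approximates that idea with TF-IDF vectors and cosine similarity. The sample transcript and the function name are made up for illustration, and this is not IBM's method.

```python
# Rough sketch of a speech-coherence feature of the kind used in psychosis-prediction
# research; NOT IBM's method. The sample transcript below is invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def sentence_coherence(sentences):
    """Mean cosine similarity between consecutive sentences in TF-IDF space."""
    vecs = TfidfVectorizer().fit_transform(sentences)
    sims = [
        cosine_similarity(vecs[i], vecs[i + 1])[0, 0]
        for i in range(len(sentences) - 1)
    ]
    return float(np.mean(sims))

transcript = [
    "I went to the store this morning.",
    "The store was out of the bread I usually buy.",
    "So I picked up a different brand instead.",
]
print(sentence_coherence(transcript))
```

Lower scores indicate speech that jumps between unrelated topics, which is one of the linguistic signals researchers have associated with later onset of psychosis; a real pipeline would combine many such features.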

IBM will work with eggheads from the Harvard Medical School, Mt Sinai School of Medicine, Stanford University, and the Northern California Institute for Research and Education, we're told. Curiously, there's no mention of IBM on the AMP website, though Alphabet's Verily is, so we'll have to assume for now Big Blue's role is buried in a sub-project.

Yay, Toyota is building your robo-butler

Cleaning surfaces and putting away stuff is an annoying chore. Why not get robots to do it for you? That’s what Toyota wants to sell you on.

The Japanese automaker is testing a robot’s ability to wipe down kitchen counters and TV screens, as well as put away tableware in dishwashers, in labs in the US, Wired reported.

Engineers at the Toyota Research Institute first trained robot grippers via simulation, where their movements were refined using reinforcement learning algorithms. Over time, the agents learned to get better at completing specific tasks. Once their performance was pretty good, they could attempt the same tasks in the real world.
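Toyota hasn't published its training code. As a toy illustration of the simulate-then-refine loop described above, here is tabular Q-learning in a one-dimensional "reach the dishwasher" world; the environment, reward scheme, and hyperparameters are all invented for this sketch and bear no relation to the institute's actual setup.

```python
# Toy illustration of reinforcement learning in simulation; not Toyota's code.
# The environment (a 1-D corridor where the agent must reach the "dishwasher"
# at the far end) and all hyperparameters are invented for this sketch.
import numpy as np

N_STATES = 10          # positions along the corridor
ACTIONS = [-1, +1]     # step left, step right
GOAL = N_STATES - 1    # the dishwasher sits at the last cell

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, len(ACTIONS)))     # Q-table: expected return per (state, action)
alpha, gamma, epsilon = 0.1, 0.95, 0.1     # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if rng.random() < epsilon:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(q[state]))
        next_state = int(np.clip(state + ACTIONS[a], 0, N_STATES - 1))
        reward = 1.0 if next_state == GOAL else -0.01   # small cost per step
        # Standard Q-learning update toward the bootstrapped target.
        target = reward + gamma * np.max(q[next_state])
        q[state, a] += alpha * (target - q[state, a])
        state = next_state

# After training, the greedy policy should simply walk right toward the goal.
print([int(np.argmax(q[s])) for s in range(N_STATES)])
```

Moving a policy trained this way from simulation onto a physical robot is the hard part, which is exactly where the real-world brittleness described below comes in.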

That part, however, is much more complex and messy, and machines can typically be thrown off by tiny changes in lighting or background. Toyota has not set a deadline for building and selling its house-cleaning droids. ®
