There is no doubt that recent breakthroughs in the cost and performance of advanced computing will accelerate the transition of AI from research labs to industry, and the security sector will certainly remain one of its main destinations.
Simple algorithm-based AI has been used in surveillance for a considerable time: licence plate tracking for following vehicles and facial recognition in controlled settings are now regarded, for all practical purposes, as solved problems.
More advanced AI, however, opens new possibilities. It gives surveillance systems, which we normally think of as passive digital eyes, digital brains capable of analysing video feeds in real time and pre-emptively triggering countermeasures before a wrongdoing has even been committed.
The applications of AI in surveillance are versatile. They range from systems that flag fights in educational institutions, watching for groups of people clustering together and alerting a human supervisor who can check the video feed or head over in person to investigate, to more exotic uses such as "animal species recognition" AI.
Whilst this is certainly welcome news for many people, it raises serious questions about the future of privacy and the ethical application of AI. What if an AI operates on a biased algorithm, or misinterprets pre-set behavioural patterns because of grainy footage or the distance to the object of surveillance? This is a potentially serious problem for the industry, and it poses questions of social justice: studies have shown that machine learning systems soak up the racial and sexist prejudices of the society that produces them, from image recognition software that disproportionately associates women with kitchens, to criminal justice systems that rate black defendants as more likely to re-offend.
The gravity of the problem was recently acknowledged by Google CEO Sundar Pichai, speaking at the Google I/O conference in Mountain View, California, on May 8, who pledged that "Google will not allow its artificial intelligence software to be used in … unreasonable surveillance efforts".
We would be interested to know your opinion: at the current level of technological development, is AI in surveillance the way forward, or a step too far?