Google is tackling head on some of the most profound questions raised by the burgeoning field of AI, launching a new group that draws together academics, technologists and charities to chew over the ethical and societal implications of increasingly capable software.
DeepMind Ethics and Society is being spearheaded by DeepMind, Google’s London-based AI division, and will be formed from a diverse mix of figures including Oxford professor Nick Bostrom, climate change campaigner Christiana Figueres and Columbia development economist Jeffrey Sachs.
Explaining the need for such a grouping, the team wrote: “As history attests, technological innovation in itself is no guarantee of broader social progress. The development of AI creates important and complex questions. Its impact on society—and on all our lives—is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built-in from the beginning. But in a field as complex as AI, this is easier said than done.”
AI is increasingly taking centre stage in our world, powering everything from spam filtering to Netflix recommendations and offering marketers the opportunity to crunch more data, more accurately, to tailor and target campaigns.
The initiative also represents something of a PR push on Google’s part as it seeks to put distance between itself and a scandal involving London’s Royal Free hospital, in which its Streams app was introduced to aid patient care without patients first being informed about what information was being shared and how.