
Are we really prepared for AI?

At the end of last year, we visited Big Data London and took the opportunity to listen to Chris Wylie – infamous for his role with Cambridge Analytica – talk about ethics in data science. He explained that, unlike medicine and other professions, data science has no code of conduct to drive ethics in the field.

Discussions around data ethics became front-page news in 2013, when Edward Snowden leaked documents revealing the extent of the surveillance operations conducted by various national intelligence and security organisations. These revelations sparked a debate that reignited in 2018 with the Facebook and Cambridge Analytica data scandal, in which personal data was harvested in an attempt to influence how people voted.

We now live in a time when we can no longer speak about data without also considering Artificial Intelligence use cases. In 2016, Microsoft demonstrated something that many had been predicting: AI can go wrong. In less than 24 hours, Twitter users managed to corrupt a Microsoft-built Twitter AI chatbot named TayTweets. TayTweets (@TayandYou) went from believing that “humans are super cool” to stating that it “hated feminists and they should all die and burn in hell”. This happened because users tweeted at TayTweets asking it to repeat offensive phrases, effectively ‘teaching’ it unethical and offensive ideas. The episode raised the question of whether we are prepared for AI, and how ethics (or the lack thereof) should be considered in the future of AI development. Would ethical boundaries have to become rules for artificially intelligent machines?

As a company that both works with large amounts of data and develops AI-driven applications, The Oakland Group must abide by the UK Government and EU legislation and initiatives that govern how we can store and process data. But compliance is only part of the picture: we take our values seriously, and remaining ethically considerate is essential to our business.

EU and UK Legislation and Guidance

In May 2018, the General Data Protection Regulation (GDPR) came into effect across the European Union. GDPR sets out the principles and obligations that data controllers and processors within the EU must follow.

The UK Government has gone further with the development of the Data Ethics Framework, which guides public servants in being data-informed and in understanding their responsibilities within their fields. The framework comprises seven main principles and includes a Data Ethics Workbook that can be used to promote ethical working, covering areas such as user need and benefit, responsibility, and transparency. Unlike GDPR, however, the Data Ethics Framework is not law, and private companies are not obliged to follow it.

In addition, the UK Statistics Authority has created the National Statistician’s Data Ethics Advisory Committee to advise researchers on whether proposed access to, use of, and sharing of public data is ethical.

Private Companies

Public organisations and collectives are not the only ones to have realised the importance of managing and using data ethically. Multiple private companies have also released reports on data and AI ethics, and they point to the same core challenges: embedding ethics into everyday processes and dealing with bias. Yet despite widespread awareness of the topic, those processes still suffer from the absence of ethics committees within companies.

Google, for example, has published a set of principles relating to data and AI. Google states that it will not develop technologies that are likely to cause overall harm, such as AI-driven weapons. Similarly, Google states that it “won’t gather or use information for surveillance violating internationally accepted norms”. The main objectives for Google’s AI applications are that they provide social benefit, do not reinforce unfair biases (a fate that befell TayTweets), and are accountable to people.

However, one of the primary challenges for data ethics in a globally connected world is this: what exactly are “internationally accepted norms”? The ethics principles stated by public bodies and private companies are very similar, but in a world which seems increasingly grey, the need for a set of global data science ethics principles – one that the public and private sectors alike could use to guide the development of AI applications – has never been greater.

Following legislation is a starting point, but a robust data strategy and governance approach that tackles ethics is essential. We ensure that ethical consideration and respect for data are baked into everything we do.