I attended TechUK’s Digital Ethics Summit this week and there were some amazing talks and discussion points. There were themes that emerged during the day that I believe will become hot topics over the next 12 months. Here I take a dive into three key takeaways.
Good data makes AI smarter
The growing trend of adopting Artificial Intelligence and Machine Learning across many areas of a business is helping organisations innovate and keep up with our increasingly digital economy. However, the success of such projects hinges on the accuracy and integrity of the data fed into these systems, especially when decision-making off the back of the results is also automated.
At the Summit we heard examples of AI projects that would have delivered incredibly positive results and secured vast cost savings… had the results been based on accurate data. As it was, they weren’t. Rather than success stories, we heard about organisations facing legal action and massive financial losses.
Whilst AI projects can, and do, deliver fantastic results, businesses must exercise caution when undertaking digital transformation and utilising powerful AI software. The first question when using data strategically or innovatively must always be: can we trust our data? Having confidence in data and its accuracy is key to the success of digital innovation and can ensure AI projects deliver smarter, more actionable results.
Chat to us about how we can improve the accuracy and quality of your data to enable your AI and transformational projects.
Consumers don’t trust businesses with their data
What is becoming increasingly apparent is that, on the whole, consumers don’t trust businesses with their data. Concerns about misuse of their data, whether or not misuse is actually happening, can erode consumer trust and damage the business-customer relationship. When organisations rely on citizen consent to power personalisation or other data-for-good projects, this mistrust can become a significant blocker to innovation and growth. This was reinforced most recently by the facial recognition technology used at King’s Cross, which calls into question the ethics of using citizen data without their knowledge and consent.
So, what can businesses do about it? Consumer trust needs to be built from two angles. The first is entering an open dialogue with citizens: increasing transparency on how their personal data is being used and the benefits that use brings both to the citizen and to other members of the public. The second is ensuring that, whether customers give consent or withhold it, their decision is respected. Organisations need to make sure that changes to consent are acted upon across the business and its agencies alike. This can only happen if there is a joined-up, organisation-wide view of consumers and their consent data.
Chat to us about how we can give you an accurate and up-to-date view of your customers’ consent.
No ‘absolute’ Right to Privacy
A discussion point raised by Chief Superintendent Chris Todd from West Midlands Police was the hierarchy of our Human Rights. Whilst the Right to Privacy has driven data privacy legislation such as GDPR, there will be circumstances in which it competes with other rights, such as the Right to Life. This will be especially pertinent for public sector organisations like the police. For example, if someone is engaged in criminal activity, their Right to Privacy may be superseded by other citizens’ Right to Life.
The ICO confirmed at this session that the Right to Privacy is indeed not ‘absolute’, in the sense that circumstances do exist where it may be superseded. Exactly what those circumstances are, though, has not yet been fully discussed or defined. Some may be black and white but, as with all ethical discussions, there will be shades of grey. We’re not yet at a stage where there is a definitive answer or guidance on this, but I feel it will be a recurring topic to be unpicked and debated as we move into 2020.