Reflections from ethics and safety ‘on the ground’ at DeepMind

Boxi shares their experiences working as a program specialist on the Ethics & Society team to support ethical, safe and beneficial AI development, highlighting the importance of interdisciplinary and sociotechnical thinking.

What led you to DeepMind?

I grew up in suburban Perth, Australia, and still remember using the internet for the first time at the local library in my early teens to research the Great Barrier Reef. Looking back, I could have never imagined the role I have now! 

I was always fascinated with how social systems could interact with technology, which eventually led me to study political science and urban planning at university. I went on to work in urban policy research and strategy consulting, and a few years later, I made the daunting decision to move from Australia to London to work for a health technology startup. Seeing the implementation challenges of AI in medicine piqued my interest in the ethical and societal impacts of AI. When the opportunity to apply to DeepMind came up, it made a lot of sense to me – a mix of the academic inquiry that I missed, and the exciting energy of a startup.

How would you describe the culture at DeepMind? And your team?

DeepMind’s culture is one where many perspectives collide and there are countless chances to find your community. In many cases, this happens formally – I’ve joined employee resource groups like QueerMinds and the People of Colour Employee Group, and have had the opportunity to attend conferences like Lesbians Who Tech. I’m also part of Queer in AI, an organisation with a mission to raise awareness of queer issues in AI/ML, foster a community of queer researchers and celebrate the work of queer scientists. DeepMind is hosting a workshop with Queer in AI on Wednesday the 6th of July to discuss the relationship between queer issues and AI. Since joining, I’ve made some really great connections with a circle of queer and POC colleagues, where we create safe spaces and support each other’s work. These interactions have felt even more rewarding now that we have returned to the office.

My team (the Ethics & Society team) is busy and close-knit. We work together to guide the responsible development and deployment of AI. One core element of this is developing the processes, infrastructure and frameworks to ensure ethical considerations are embedded into all of our projects. We often partner with other teams at DeepMind over extended periods to consider the positive and negative downstream impacts of our work – for example, our language models, or science projects like AlphaFold.

We’re constantly learning as a team, talking to each other about our projects and the challenges we’re facing. This involves a lot of reflection to fully understand who our technology may impact, and to determine who the right people are (internally and externally) to help tackle the challenges identified. This can be incredibly complex at times, but it is a lot of fun.

What does a typical day look like?

I lead our research collaborations, which operationalise ethics and safety across our work at DeepMind. This normally includes conducting ethical impact assessments, partnering with teams to conduct ethics reviews, and facilitating workshops to think through benefits, risks and mitigations.

As we work across varied domains of research (e.g. language, reinforcement learning, robotics), we are always in conversation with experts across research, engineering, legal, policy, communications and more. We also meet as a team daily – this is super important in our area of work, as ethics and safety questions are best discussed in groups (in order to check personal biases and debate differences of opinion, for example).

What gets you most excited about your role?

I love learning from those around me – internally and within the wider AI ethics community. There is still so much more to learn and I am humbled by the knowledge and curiosity of everyone I speak to. What I find particularly interesting is learning from those who are adjacent to, or outside of, traditional AI/ML research fields. Better understanding the perspectives of those in, for example, social sciences, philosophy, or critical theory allows us to better identify and challenge the fundamental values underpinning technology.

A great example of this would be this year’s ACM Conference on Fairness, Accountability, and Transparency (FAccT) in South Korea. The agenda at this conference was far-reaching, covering everything from my colleagues’ paper on fluid identity in machine learning, to a session with Youjin Kong on AI ethics and feminist philosophy, and a keynote by Karen Hao on journalism covering AI ethics and technology.

FAccT, South Korea.

Why is this area of work so important?

In a recent blog post, our COO Lila discussed the idea of pioneering responsibility and its key role in our mission. I think that’s exactly right – not only is it critical for the wider tech community but it’s especially important when it comes to creating powerful, widespread technologies like artificial intelligence. It must be part of the conversation at every stage and embedded into everything that we do.

I’m proud to be part of a team that gets to explore these ideas – and while of course we have much more to do in this space, I do believe we’re helping make a positive impact on the world around us.

Any tips for someone looking to get into a similar role? 

Read as much as you can about AI ethics and safety, and better yet, explore sociotechnical work that discusses the history of AI, the current harms of technology within society, and visions for what safe and ethical AI could look like. Some favourites of mine are Ruha Benjamin’s Race After Technology and Karen Hao’s recent series on AI and colonialism. I would also recommend checking out Kevin Guyan’s Queer Data: Using Gender, Sex and Sexuality Data for Action, and my colleagues’ paper on algorithmic fairness for the queer community.

Finally, I want to reassure ‘non-technical’ or more social science-oriented folks that this space is for you. I often have people tell me they feel intimidated by AI/ML despite having an interest in technology and ethics. Please be assured that your perspective will be valuable to this industry – our values shape technology, just as technology shapes our social lives. Addressing the challenges of AI development will require interdisciplinary and sociotechnical thinking, and people from all walks of life. Don’t doubt yourself – go for it!
