
I'm an AI ethicist. I make sure the tech is safely deployed to the world, but I am not an oracle.

May 13, 2023, 18:24 IST
Business Insider
Here is what Giada Pistilli's job as Principal Ethicist at Hugging Face entails, as told to Insider's Aaron Mok. Courtesy of Giada Pistilli
  • Giada Pistilli, 31, is the principal ethicist at Hugging Face and helps ensure AI is safely deployed.
  • The main question that drives Pistilli's work is: How can the public use AI for good?

This as-told-to essay is based on a conversation with Giada Pistilli, a 31-year-old based in Paris, France, about her job as a principal ethicist at AI firm Hugging Face. The following has been edited for length and clarity.

I'm a full-time AI ethicist making sure the tech is safely deployed to the world.

But don't be fooled: I am not an oracle.

In short, I work at the intersection of ethics, policy, and law to advance ethical frameworks around AI, both within Hugging Face and among the public. The main question that drives my work is: How can the public use AI for good?

I used to work as a policy advisor for the European Parliament on human rights issues. After stepping down, I pursued my master's in political philosophy and ethics at Sorbonne Université, and later, my PhD in philosophy at the same school, where I'm now wrapping up my thesis on the ethics of conversational AI.


Once I completed my master's program, I worked as a research engineer at a Paris-based chatbot company, collaborating with designers and machine learning engineers on applied ethics research. I later joined BigScience, a project organized by Hugging Face, an open-source AI and machine learning platform, to help make a GPT-style language model open to the public.

I first approached Hugging Face to understand how I could help build Bloom, its large language model. I collaborated with the firm as part of its legal and ethical scholarship working group, then got hired as its full-time principal ethicist three months later in May 2022. I've been working there ever since.

60% of my time is spent on research, which includes reading academic papers on ethics and technical AI studies, as well as writing my own papers on topics such as ethical charters, the ethical guidelines a company expects its employees to meet. The rest is spent on collaborations.

I'm part of a team called "ML and Society" where I meet with research scientists, a legal counsel, and a policy director once a week to research and discuss tensions in AI, such as bias in models, safe AI applications, and public policy.

Outside of the group, I help with internal projects like writing ethical guidelines for Hugging Face's diffusers library.


I also provide ethics advice and guidance for external projects, like bringing Stable Diffusion to our platform and researching how to safely deploy AI in the healthcare space with external collaborators.

First thing in the morning, I spend 30 to 45 minutes checking Hugging Face's latest content moderation reports. After that, I conduct research or work on new papers until lunchtime.

Around noon, I assist my colleagues on specific projects, such as updating Hugging Face's content moderation policy. Recently, I advised a healthcare client on how to deploy AI ethically by asking guiding questions, like whether the client has the right licensing or if it will make its models public.

Some days, I'm tasked with quickly addressing urgent situations on the platform, like heated discussions between users or cases of AI causing harm. The rest of my day is spent on my research.

The main challenge I face is that people at times perceive me as the moral police, with the power to determine what is right or wrong. This perception can be dangerous because I don't have all the answers. The more I study ethics, the less sure I am about what I know.


For example, a journalist approached Hugging Face claiming that one of his articles had been plagiarized by the company's language model. Of course, I was happy to help, but it's a tricky situation to address.

It's also not fair to be put on the spot when something bad happens. Just look at how big tech firms have treated their ethicists — it's been carnage.

Philosophers, and the humanities in general, are very much needed as AI advances. Ethicists make sense of complicated questions around our humanity. They also help project, not predict, an ideal future for society so it's closer to a utopia than a dystopia.

I hope the tech community takes the role of AI ethicists more seriously.
