
The top 7 media people in AI

Kali Hays and Lucia Moses

Alexandru Costin, Adobe. Adobe
  • The following media people made it onto Business Insider's 2023 AI 100 list.
  • The main list spans several industries from hardware to education.

Generative AI has already begun to radically change the media landscape.

Image generators like OpenAI's DALL-E and Midjourney can quickly produce realistic images based on user prompts.

GPT-4, ChatGPT and other AI services generate impressive answers, summaries and other text.

The following people, who made it into Business Insider's 2023 AI 100 list, are harnessing this technology for new media use cases, and monitoring GenAI's performance to ensure outputs are accurate, safe, and fair.

Joy Buolamwini, Algorithmic Justice League

Dr. Joy Buolamwini, Algorithmic Justice League. Poet of Code

Dr. Buolamwini's research found that major tech companies' AI-driven facial recognition tools were deeply biased and inaccurate. In 2016, she founded the Algorithmic Justice League after her MIT master's thesis showed that the data sets used to train several facial recognition tools were "overwhelmingly" made up of images of lighter-skinned people and left huge gaps in the tools' ability to accurately recognize women of color. She also coined the concept of "the coded gaze," the idea that society will increasingly be seen through tools trained on data from the past that do little more than promulgate existing biases. Buolamwini is now the "artist-in-chief" and president of the AJL and recently wrote the book "Unmasking AI: My Mission to Protect What Is Human in a World of Machines."

Elham Tabassi, NIST

Elham Tabassi, NIST. NIST

Tabassi wrote the first-ever AI Risk Management Framework for the US National Institute of Standards and Technology. The well-received framework explored the good and bad of AI, and everything in between, and offered practical guidelines for avoiding its risks. Its success led the White House to direct NIST to form a new AI working group. Tabassi has worked at NIST for nearly 25 years and last year became the first Associate Director for Emerging Technologies at its Information Technology Laboratory. Some of her earlier work at NIST focused on machine learning projects around biometric data. Tabassi also holds several leadership roles, including vice chair of the Organisation for Economic Co-operation and Development's working party on AI governance. She was educated at Iran's Sharif University of Technology and then at Santa Clara University.

Chris Wiggins, The New York Times

Chris Wiggins, The New York Times. The New York Times

The New York Times is the news industry's leading subscription success story, with nearly 10 million readers paying for print or digital access and subscription revenue that has surpassed advertising. Much of the credit goes to its chief data scientist, Chris Wiggins, whose team uses machine learning to figure out which readers are likely to subscribe and which are likely to cancel, work that was previously done through surveys. Wiggins also helped create software that predicts how readers will feel after reading an article, which is used for what the Times calls "perspective targeting" of ads. Wiggins, who in 2014 became the first person to hold the position at the Times, has several other roles, including associate professor of applied mathematics at Columbia and co-founder of the nonprofit HackNY.

Camille Carlton, Center for Humane Technology

Camille Carlton, Center for Humane Technology. Center for Humane Technology

Carlton describes her path to working on AI policy as "nonlinear." She studied international affairs and had a longstanding interest in how society could "realign capital with the things we really value," something she links to her upbringing by parents who fled Cuba as refugees. After seeing how divided her family became around the 2016 election, largely due to information they saw on sites like Facebook, Carlton was inspired to study technology and society. She now works at the Center for Humane Technology, advising policymakers on laws and regulations for AI. "I'm most scared about how these systems will further entrench deep power asymmetries and socioeconomic inequality," Carlton said. "We're seeing a few large companies run by a handful of people shaping everything. We want something different."

David Evan Harris, UC Berkeley

David Evan Harris, UC Berkeley. UC Berkeley

Harris has taught courses on technology and AI at UC Berkeley and has researched the intersection of democracy and AI for years. He recently worked on Meta's Responsible AI team but is now sounding the alarm about the risks posed by open-source AI models. "It was great to influence that company from the inside," Harris said. Harris previously worked with the Institute for the Future to forecast AI's long-term impact on society, and he recently advised the White House and the European Commission on considerations for AI regulation. He is also a Chancellor's Public Scholar at Berkeley, a senior advisor for the Psychology of Technology Institute, an affiliate scholar at the CITRIS Policy Lab studying the regulation of AI and social media, and a senior research fellow at the International Computer Science Institute.

Alexandru Costin, Adobe

Alexandru Costin, Adobe. Adobe

Costin is the vice president of generative AI and Sensei at the content-editing giant Adobe, where he led the development of Adobe Firefly, the company's new suite of creative generative AI models. At a time when creators feared AI models would steal their work, Adobe stood out by promising that Firefly would be trained only on content that is in the public domain or that Adobe has rights to through its popular stock-image service, Adobe Stock. Adobe followed that by announcing it would pay bonuses to Stock contributors whose content is used to train Firefly. A native of Romania, Costin was an entrepreneur who sold his web-tools company, InterAKT, to Adobe. He ran Adobe Romania for 10 years before relocating to the US.

Aviv Ovadya

Researcher Aviv Ovadya. Aviv Ovadya

Ovadya was early to sound the alarm about the potential dangers of AI, warning as far back as 2016 of an "infocalypse" fueled by the rise of what was then known as "synthetic media," later called deepfakes and now generative AI. He has researched AI advancements, misinformation, and the impact of social media on society and democracy. He's a founding member of the Center for Social Media Responsibility at the University of Michigan and of the Credibility Coalition, and he is a fellow at the Alliance for Securing Democracy. He's currently an affiliate with Harvard, where he's developing proposals around promoting information that bridges divides. Ovadya is also a visiting scholar at the University of Cambridge. "The ecosystems I'm trying to instigate and accelerate are about ensuring democracy can keep up with AI," Ovadya said.
