Photo of Mark Chen

Artificial intelligence & robotics

Mark Chen

He’s teaching AI models new skills, from generating images to producing lines of code.

Year Honored
2025

Organization
OpenAI

Region
Global

ChatGPT is fluent in text, audio, and images, capable of taking prompts in one format and generating results in another. Much of this fluency is attributable to Mark Chen, 34, who is now chief research officer at OpenAI.

After joining the company in 2018, Chen led a team that pioneered the techniques many leading AI models now employ to ingest and generate visual data. In particular, he figured out how to adapt the transformer architecture, which researchers had successfully used to generate natural language, to handle images. The pixels that make up an image, it turned out, could be encoded as a series of tokens, similar to words in a sentence.  

“Once you have this representation that treats images like a strange language, then you can use it in the transformer,” says Chen. The team incorporated the method first in ImageGPT, released in 2020, which was followed by the DALL-E series. The approach is now deployed in GPT-5, OpenAI’s flagship model.
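The core idea can be sketched in a few lines. This toy example (illustrative only, not OpenAI's actual pipeline) quantizes grayscale pixels into a small discrete vocabulary and flattens the grid row by row, yielding a one-dimensional token sequence a transformer can model the way it models words:

```python
# Toy sketch of the image-as-tokens idea behind ImageGPT: quantize each
# pixel into a small discrete vocabulary, then flatten the image row by
# row into a 1-D token sequence, similar to words in a sentence.

def image_to_tokens(pixels, levels=16):
    """Map a 2-D grid of 0-255 grayscale pixels to a flat token sequence."""
    bin_size = 256 // levels          # each token covers a range of intensities
    return [p // bin_size for row in pixels for p in row]

def tokens_to_image(tokens, width, levels=16):
    """Invert the mapping (lossily): tokens back to an approximate image."""
    bin_size = 256 // levels
    mid = bin_size // 2               # reconstruct each bin to its midpoint
    pixels = [t * bin_size + mid for t in tokens]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

image = [[0, 64], [128, 255]]         # a tiny 2x2 "image"
tokens = image_to_tokens(image)       # -> [0, 4, 8, 15]
```

The quantization is lossy by design: a smaller vocabulary keeps sequences short enough for a transformer to handle, at the cost of some color fidelity.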

Besides his work on images, Chen also spearheaded Codex, OpenAI’s model that generates computer code from prompts. Though code is written as text, a model that produces it is held to a different standard than other language models: the code must perform the desired function when executed, rather than merely sound correct.
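That standard lends itself to execution-based evaluation: run the generated code against unit tests and score functional correctness, not plausibility. A minimal sketch of the idea (assumed for illustration, not OpenAI's actual harness; the `solution` entry-point name is hypothetical):

```python
# Execution-based evaluation sketch: a candidate program is judged by
# whether it passes unit tests when run, not by how plausible it reads.

def passes_tests(candidate_src, test_cases):
    """Execute a candidate function definition and check it against tests."""
    namespace = {}
    try:
        exec(candidate_src, namespace)      # run the generated code
        fn = namespace["solution"]          # hypothetical entry-point name
        return all(fn(*args) == expected for args, expected in test_cases)
    except Exception:
        return False                        # crashing code fails outright

plausible = "def solution(a, b):\n    return a - b   # reads fine, wrong op"
correct   = "def solution(a, b):\n    return a + b"
tests = [((2, 3), 5), ((0, 0), 0)]
# passes_tests(plausible, tests) is False; passes_tests(correct, tests) is True
```

Sandboxing is the real-world complication this sketch omits: generated code is untrusted, so production harnesses run it in isolated environments rather than with a bare `exec`.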

Now, Chen leads OpenAI’s effort to make a model capable of more complex reasoning than earlier iterations could manage. The company’s strategy is to have the model slow down and break a prompt into intermediate steps, an approach known as chain of thought, which OpenAI first demonstrated with the release of its o1 model in 2024. Chen aims to soon build models to underpin agents that work autonomously for long periods of time to generate more-nuanced outputs, such as a research plan to carry out a science experiment.
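The decomposition idea can be illustrated with a toy solver (a loose analogy, not how a reasoning model actually works): rather than emitting an answer in one shot, it records named intermediate steps, which makes a multi-step problem easier to follow and to check.

```python
# Illustrative analogy for chain-of-thought: work through a problem as a
# trace of named intermediate steps instead of producing one opaque answer.

def solve_with_steps(unit_price, quantity, tax_rate):
    """Compute a taxed total, recording each intermediate step."""
    steps = []
    subtotal = unit_price * quantity
    steps.append(f"subtotal = {unit_price} * {quantity} = {subtotal}")
    tax = subtotal * tax_rate
    steps.append(f"tax = {subtotal} * {tax_rate} = {tax}")
    total = subtotal + tax
    steps.append(f"total = {subtotal} + {tax} = {total}")
    return total, steps

total, trace = solve_with_steps(4, 3, 0.5)   # total -> 18.0, trace has 3 steps
```

The trace is the point: each step can be inspected, and an error in one step is visible rather than buried inside a single final number.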

In his new position, Chen also works on product safety. A safe AI model, he says, is one that does what the user wants without “going rogue,” such as by sending emails to people without the user’s consent. He will also have to contend with criticism that the company’s models exhibit cultural and political bias, as well as with ongoing lawsuits alleging intellectual property infringement in its training data.