Who is technology for? John Nelson’s push to align innovation with public values


Raised to believe in service and responsibility, Assistant Professor John Nelson now examines why some technologies advance while others stall, and how society can ensure innovation genuinely benefits the public.

[Image: a headshot of John Nelson]

By Taylor Pedersen, CLA Student Writer - April 8, 2026

When John Nelson arrived at Oregon State University, he brought with him a question that has followed him since adolescence: How can science genuinely serve the public good?

Now an assistant professor in the School of Public Policy, Nelson studies the societal dimensions of emerging technologies, from genome editing to artificial intelligence, and the policies that shape how those technologies improve people’s lives.

He grew up in Eau Claire, Wisconsin, the son of two public school teachers. “I always liked books,” he said simply. “I always felt most comfortable in the classroom: reading and writing, and researching.”

As a teenager, Nelson was, in his own words, “academically inclined.” With a perfect score on the ACT, he chose Arizona State University for its academics and walked on to the cross country team.

In high school, amid the political tenor of the Bush and early Obama years, climate change filtered into his consciousness. There wasn’t a single moment. Instead, there was a growing sense of obligation.

“I always felt like I had to do something to help people,” he said. “And I felt like I had a responsibility to do something big.”

At the time, that meant nuclear physics. Clean energy felt like a tangible way to “serve humanity as a whole.” If you were good at school, he reasoned, you became a researcher. You solved a problem.

He began at ASU in physics. But because he entered with a stockpile of high school credits, he had space to explore. During his sophomore year, he took a semester to step outside his major and enrolled in a graduate-level course called Science, Technology, and Public Affairs.

The shift was immediate. 

That fall, he found himself enrolled in quantum physics and classical mechanics, and, alongside them, a graduate seminar called Uncertainty and Decision Making, focused on how human institutions make choices under the inescapable uncertainty of the world. 

“The contrast between how happy I was in that class and how I felt in my physics classes,” he said, trailing off. “That told me something.” 

One day, midway through a classical mechanics lecture, he stood up and walked out. He never went back.

The lab would not be his area of focus. Policy would.

Now, Nelson’s research asks a deceptively simple question: Why do some technologies advance rapidly while others stagnate?

The common assumption, he explained, is that progress follows money and desire. If society wants something badly enough and funds it generously enough, innovation will follow. But the story is more complicated.

Some fields are primed for breakthroughs; others face structural scientific constraints. And beyond the lab bench lies another decisive factor: public trust. 

In focus groups he conducted on human genome editing, participants expressed both fascination and distance. The technology seemed impressive, but irrelevant.

“I can’t afford healthcare,” one participant told him. “So what does this have to do with me?”

For Nelson, that comment captured an essential truth. A technology cannot be considered progress if it fails to improve most people’s lives. Public opinion, he argues, should not merely react to innovation; it should help define what counts as progress in the first place.

His first book, co-authored with longtime mentor Barry Bozeman, was released last summer and advances that argument. Titled Advanced Introduction to Innovation and Public Values, it challenges the notion that profitability is the primary yardstick for technological success. Many innovations make money, he noted, without advancing broadly shared societal goals such as prosperity, safety, civil liberties, and public health.

Some innovations actively undermine them. 

In the book, Nelson introduces the concepts of “public value innovations,” new ideas that advance public priorities, and “anti-public value innovations,” which undermine them. Public value innovations include antibiotics and the polio vaccine, while more obvious anti-public value innovations include Ponzi schemes and the mass marketing of fentanyl. The existence of such cases, he argues, should force society to rethink how technologies are evaluated and governed.

That concern intensified during his postdoctoral work at the Georgia Institute of Technology, where he was embedded in a $65 million AI manufacturing initiative. There, working closely with engineers, he saw what artificial intelligence looks like on the ground; not as an abstraction, but as infrastructure.

AI systems require vast data centers, immense energy, and rare earth minerals. They are designed by people and deployed by institutions. And yet, Nelson argued, the governance structures shaping them lag behind their societal impact.

“We decided long ago that the immense power of government ought to be guided by the will of the people,” he said. “But we haven’t said that kind of thing about technology.”  

Right now, he observed, the bar for technology governance is low. But AI’s visibility offers an opportunity.

Governments can prohibit certain harms, such as nonconsensual intimate imagery generated by AI, and experiment with market structures that reward companies for serving public needs rather than exploiting vulnerabilities. Firms, too, could incorporate stakeholder governance models that give workers and communities a voice in decision-making.

“Technologies come from somewhere,” he said. “They’re made by people.” 

At the School of Public Policy, Nelson described himself as occupying a “niche”: an interdisciplinary space that sits on the fringes of political science, sociology, engineering, and applied ethics. Few departments are built precisely for scholars who examine the societal dimensions of emerging technologies.

When he saw the job posting at OSU, focused on science, technology, and AI policy, he forwarded it to his mentor. “This looks like you,” the mentor replied. 

The fit felt exact. The university’s growing investment in AI research provides fertile ground for collaboration with computer scientists and engineers, while the policy school offers a home for the normative questions that drive his work.

As a teenager, Nelson wanted to solve climate change by building better reactors. As a scholar, he still wants to solve big problems. But now he believes the urgent challenge is not in inventing the next breakthrough, but in deciding what breakthroughs are for.