
It’s common knowledge now that AI is rewriting the rules of how we work, communicate and innovate. But as generative AI becomes more powerful, it also raises critical ethical concerns.
Bias, privacy and transparency are challenges that companies must tackle to ensure AI serves humanity rather than undermines it.
At a recent panel discussion on ethical AI at Generative AI Expo 2025, part of the #TECHSUPERSHOW, industry experts weighed in on what responsible AI should look like and the challenges in making it a reality.
The session featured Ajay Dankar, co-founder and CEO, Trussed AI; David Hartman, founder and CEO, The SilverLogic; Venk Jayaraman, chief technology officer, ModMed; Bhavesh Patel, vice president of business and consumer experience applications and IT operations, Memorial Healthcare System; and moderator Izzy Sobkowski, founder, Ask-RAI.
For Dankar, ethical AI boils down to responsibility. "Ethical AI is about being responsible," he said.
Jayaraman took it further, calling ethical AI "the bedrock of our system." In healthcare, trust between doctors, patients, and technology is paramount.
Patel connected ethics to culture. "How do we extend it to any digital platform? That is the main question for us."
But the discussion quickly turned to a deeper, more uncomfortable truth: ethics don't mean the same thing in every context. Each country defines ethics differently, and an AI can follow the ethical rules it was given yet still produce outcomes its designers never intended.
And one of the biggest ethical concerns is bias. AI models learn from data, and if that data is skewed, so are the results.
"Trust doesn’t come without transparency," Jayaraman said. "There are many different kinds of bias. In healthcare, what's most important is that it doesn’t favor one demographic over another."
Dankar introduced another layer to the discussion.
"Bias, in terms of ethics, depends on the context," said Dankar. "Who are you being biased for? It has different subdimensions. We need to think about bias based on different use cases."
Hartman took a more pragmatic approach.
"If there's more sample data on one thing over another, the model will lean toward that thing," said Hartman. "AI isn't alone in making bad decisions. Humans do it all the time."

Regulatory frameworks are often seen as a barrier to innovation, but when it comes to AI, they may be necessary guardrails.
“We need clear policies," Dankar said. "Regulations aren't new. The challenge is figuring out how to adapt existing frameworks for generative AI rather than reinvent the wheel."
There is no one-size-fits-all approach to ethical AI. Different industries have different concerns, and global perspectives on ethics vary widely. What remains constant, however, is the need for transparency, accountability and a commitment to minimizing harm.
Edited by Greg Tavarez