
Traditionally, when someone heard the word “copilot,” they thought of one half of a duo flying a plane or jet, the person who helps make sure all passengers on the flight reach their destination safely.
Then, Microsoft came in with Copilot and took over what pops into the minds of many when the word “copilot” is used. Now we are at a point where “copilot” has become a generic term in AI.
A panel session at Generative AI Expo 2025, part of the #TECHSUPERSHOW, explored the role of generative AI as a copilot in various enterprise functions, from customer service to knowledge management.
The panel featured moderator Andy Abramson, CEO and founder, Comunicano Inc.; Chad Simpson, CIO, CITY Furniture; Izzy Sobkowski, Founder, Ask-RAI; and Pavlo Yalovol, VP of Innovation, Intetics.
Abramson asked the panel to describe what a copilot means to them and their companies. We got answers like “a piece of intelligent code that can work with a person” from Sobkowski, who added that “it is extraordinary for everyone to have access to technology like Copilot and agentic AI because it benefits all.”
Yalovol went a bit more in-depth and said, “Copilot should be built in by design. You need to understand how it was built. At each stage there needs to be complete transparency for all levels.”
Copilots promise greater efficiency and accuracy. But a question followed: if mistakes are made, who takes responsibility? Can copilots be held accountable?
“Right now, the user has to be responsible for mistakes made by the AI,” said Simpson. “AI is a tool; therefore, the human should take responsibility for the tool for the foreseeable future.”
Another question built on that: how do we build better for the future and hold AI accountable? AI is still in its infancy. How do we teach it to be accountable?
“GenAI is fundamentally flawed,” said Sobkowski. “These technologies don’t have the ability to explain themselves. The only way to teach them to do that is through testing.”
If the dream of AI is not really automation but augmentation, giving people smarter tools so they can work smarter rather than be replaced, how can enterprises rethink workforce training so employees are better prepared to work with AI?
“I don’t think all businesses would agree that the dream of AI is not really automation,” said Simpson. “A burger joint in California is fully automated. The machine cooks fries and burgers and serves you. No humans are seen. That said, for us, the goal is to take away employees’ tedious tasks so they can focus more on innovation. In terms of training, we are training them similarly to how we do with cybersecurity: different tiers of core tools that will make their lives easier. If we can train our organization, we will get to the point where great ideas are created.”

Ethics is a big question around AI and copilots. How should enterprises establish ethical guidelines for AI and copilots that ensure fairness and transparency?
“It is still being solved. We want AI to be perfect, and we are never going to get there,” said Simpson. “Our teams are far from perfect. This creates a chasm.”
Near the end of the panel discussion, Abramson delivered a strong message regarding ethics and profit.
“You should never throw out ethics for profit. It is not ethical to kill off your competition just for profit,” said Abramson. “If that is a world businesses go to, then we are in the wrong world.”
Looking around the room, I saw the majority, if not all, nodding in agreement. Clearly, that message rang true to their ethical beliefs.
Edited by
Greg Tavarez