Apr 30, 2025

Carey AI Summit: Decoding the future

Carey’s AI Club hosted the AI Summit, bringing industry leaders together to explore the future of artificial intelligence and its impact across disciplines.

Can artificial intelligence change the world? If you attended the 2025 AI Summit at Johns Hopkins Carey Business School, you’d probably say it already has. 

From conversations about responsible AI to real-world applications at large corporations like Microsoft, the AI Summit brought together industry leaders, innovators, and enthusiasts to explore advancements, challenges, and the future of AI.

Hosted by Carey’s AI Club, this year’s AI Summit, themed “Decoding the Future,” explored the evolving role of artificial intelligence across industries. Through three expert panels, attendees heard how AI is transforming industries and discussed strategies to ensure its responsible development and application.

“This year’s theme was inspired by the rapid integration of AI across industries and the growing need to balance technological advancement with ethical responsibility,” said Jay Patel, a full-time MBA student and president of Carey’s AI Club. “We wanted to highlight not just how AI is transforming sectors like health care, marketing, cloud, and startups, but also emphasize human-centered design and responsible AI practices. The theme reflects the dynamic intersection of innovation, business, and ethics: core pillars that resonate with Carey’s mission.”

Panel 1: AI-Driven Innovation: The Future of Technology and Marketing
The first panel of the day explored how artificial intelligence can be a powerful force for progress if built with intention, inclusivity, and humanity in mind. Panelists included Silvia Badilla Arroyo, customer success manager at Salesforce; Marsha Fils, principles pioneer at Google; Rob Spalding, CEO of SEMPRE.ai; Arnaud Jaspart, CTO of Enquire AI; and Claudia Sánchez Alegria, business development manager at Alphabrands.

Punita Verma, a member of Carey’s AI Club and a current artificial intelligence student at the Johns Hopkins Whiting School of Engineering, shared how her passion for ethical AI was sparked by both her coursework and real-world experiences. Through her studies, Verma gained perspective on how AI can be used not just to drive profits, but to address broader global and societal challenges. Her nonprofit work has allowed her to explore how AI tools can serve communities rather than disrupt them, especially in labor-driven economies where technological shifts may destabilize livelihoods.

“As a student, I’ve been able to explore AI not just from a technical standpoint, but also through the lens of leadership, ethics, and systems thinking,” said Verma. “One of my favorite courses covered everything from model development to deployment challenges, regulatory risk, and ethical frameworks. The summit brought together voices across industries to address the very questions we’re tackling in class: How do you scale AI responsibly? When does context matter? What does leadership in AI really look like?”

Other panelists echoed these themes of responsible, human-centered AI. The discussions made it clear that AI should complement human efforts, not replace them. The speakers emphasized the need to involve diverse communities in governance, test tools in real-world scenarios, and consider cultural and ethical factors in every decision.

The conversations also addressed common misconceptions, like the concern that AI will replace human jobs. Panelists reminded attendees that artificial intelligence is a tool, not a substitute for the human behaviors it cannot replicate. Sánchez Alegria noted that it’s important to remember that behind every data point is a real person. Building effective AI tools means grounding them in human values, being transparent about their use, and continuously listening to and learning from those they impact.

Panel 2: Human-Centered AI: Building Responsible and Trustworthy Systems
At the second panel, speakers explored the technical, ethical, and societal sides of developing AI. Panelists included Wole Moses, chief AI officer at Microsoft; Brian Carlson, CEO of Storytime AI; Thomas Krendl Gilbert, CEO of Hortus AI; Michael Digafe, principal of financial services at AWS; and Tim Kulp, CIO of Mind over Machines.

A key theme was the importance of perception and how people understand and interact with AI tools. Many users treat AI tools like advanced search engines such as Google and Bing, but there’s a growing need to better communicate their intended use, limitations, and benefits.

Moses discussed how Microsoft branded its “Copilot” tool to emphasize collaboration, making it clear that the tool is meant to assist rather than replace the human element. He explained that the development process involves conversations about the ethical and societal challenges of using AI, including identifying the right use cases for AI, determining when human involvement is necessary, and ensuring that people can trust and feel comfortable using the tool.

“Microsoft is building AI responsibly through a comprehensive strategy grounded in six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability,” said Moses. “These principles are supported by a strong governance framework including the Responsible AI Council, Office of Responsible AI, and Aether Committee, and practical tools like the Responsible AI Standard, impact assessments, and dashboards. With efforts like the annual Responsible AI Transparency Report and alignment with frameworks such as the NIST AI Risk Management Framework, Microsoft ensures AI is ethical, transparent, and aligned with societal values.”

Panelists discussed the barriers to building trustworthy AI and the lack of user agency. Especially in high-stakes contexts like government services and health care, companies often define use cases without input from the communities the tools serve. Without thoughtful integration, AI could lead to overreliance on tools users don’t fully understand.

Despite those concerns, the panelists remained optimistic about AI’s potential to address major challenges. Carlson shared how AI has helped teach literacy in nearly 200 languages. But the most successful integrations will require including frontline workers in the design process. When employees are involved in creating the tools they use, they adopt those systems more readily and effectively.

When discussing AI’s broader societal impact, the panel agreed that AI can both diminish and enhance critical thinking, depending on how it's used. Like calculators in math education, AI could unlock higher-level learning if implemented thoughtfully. Kulp also emphasized that AI can ignite people’s passions, enabling them to focus on purposeful work. 

Panel 3: AI-Powered Transformation: Health Care, Cloud, and Startup Innovation
The final panel opened with lighthearted stories about the most ridiculous ways the speakers have used AI, from turning angry emails into polite customer responses to writing love letters, showing just how integrated AI has become in both personal and professional spaces.

Panelists included Jacob Artz, engagement delivery senior manager at Salesforce; Cybil Roehrenbeck, executive director of the AI Healthcare Coalition; Nick Culbertson, managing director at Techstars; and Andrew Bittan, dynamic global partner success manager at ServiceNow.

Throughout the session, ethics, equity, and human oversight were recurring themes. The panel agreed that while AI is evolving, human intuition remains irreplaceable. They emphasized the importance of balancing innovation with accountability and patient-centered care. 

The conversation highlighted how AI adoption has accelerated post-COVID, particularly due to financial pressures on health care systems. While AI is getting more attention, challenges persist around data privacy, regulatory compliance, and the sensitivity of health care data, and trust remains essential for successful AI adoption in health care.

When discussing low-code/no-code tools, Artz explained that AI reduces time constraints when building solutions. What would historically take days or weeks can now be done within hours or minutes.

Addressing startup challenges, the panelists said that AI companies must focus less on technology and more on delivering value and ROI. Building trust with hospitals requires both innovation and real operational and financial benefits. 

The summit as a whole showed the pace at which AI is reshaping industries. While AI offers unparalleled opportunities for innovation, efficiency, and growth, all panels agreed that its development must be grounded in ethics, governance, and human insight. These conversations left attendees feeling energized, thoughtful, and better equipped to navigate the evolving landscape of AI.

“Events like this provide students with exposure to cutting-edge industry insights, networking opportunities with leaders, and a deeper understanding of how classroom concepts apply in real-world scenarios. They spark curiosity, inspire innovation, and encourage students to think critically about the future they will help shape. It’s also a platform to engage with emerging trends and explore career paths in AI-driven industries,” said Patel. 
