A conversation about the ever-shifting artificial intelligence landscape, including trust, regulations, education and more.

How do you—as company leaders, educators or government employees—build trust in a trustless society?

Natasha Allen | Partner | Foley & Lardner

Trust is built over time. People were mistrustful of the internet. It takes time for people to understand what these technologies can do and what the impacts are. It’s the same thing with people—you don’t automatically trust people, right?

Dr. Cain Elliott | Head Legal Futurist | Filevine

Most people you work with are mimicking garbage they heard from other people, not unlike a large language model (LLM). I’m always shocked by how much trust people place in others. It’s this strange sanctity people place in talking to another human. 

Carlos Kemeny | Co-Founder & CEO | DrumData.ai

Sometimes, the context makes it appropriate to trust humans more than AI, but there are also contexts where we should trust AI more than humans. The key is truth versus error. We should focus on distinguishing and learning what truth is instead of who’s outputting it.

Earl Foote | Founder & CEO | Nexus IT Consultants

I’m excited about generative AI. I’m also abundantly cautious. Trust ultimately comes from the way that we decide to leverage the technology. I know the dangers of an unregulated space when it comes to AI. As a human race, we have to think about the extent to which we want to replace humans with AI. Some disruption is healthy, but unfettered growth and access is probably not the right answer.

Br33 Jones | Founder & CEO | GAM3R STUDIOS

How much of the general public genuinely understands what AI is? The issue is our global lack of education and thorough understanding of how our technology works. Most people don’t know what AI is or how it works. If they understood the processes, the algorithms and the logic sequences that go into AI, even at a loose level, we could then choose to engage with it in a different way that would moderate how much trust we’re willing to give it. 

Margaret Busse | Executive Director | Utah Department of Commerce

We think a lot about trust and how to strengthen trust in our economy because people don’t want to participate if trust erodes. If they feel like they’re getting duped by AI, they won’t want to participate. Education is critical; it dispels the mystery. Disclosure is another thing we’re discussing, particularly in areas that might be sensitive. As companies start to deploy AI, they need to act in trustworthy ways; otherwise, people will get very cynical.

Jacob Miller | VP, Data Science | Pattern

Trust is a suitcase word—everyone thinks they know what it is, but everyone’s suitcase looks different. We should break trust in AI into components: one is efficacy. Does it work, or does it not work? Under that component, we’re moving forward, and things are getting better and more trustworthy over time because they work better. Another component is being an honest, moral organization or bot. How do you build that type of trust? It comes down to transparency.

Barclay Burns | Founder | GenerativeImpact.AI

We must create a world in which we maintain some institutional relationship of trust. We need to build institutions so that primary relationships are held sacred. If we break parents and kids apart, if ChatGPT becomes our parent, teacher or doctor, we’ve broken up institutions that have existed for hundreds of years. 

Nate Sanders | Co-Founder & CEO | Artifact

The fact that many people don’t know how AI systems work inherently affects our ability to trust AI. The public perceives AI as having the ability to use probability to determine what it will do—it’s the exact opposite. We use probability to determine how AI will achieve the objective it’s been trained to do. Also, when AI is used nefariously, we tend to attach regulations to the technology quickly. Instead, we should use already available systems to regulate people. Rather than trying to stymie or restrict the technology, we should use current systems to build trust between humans.

With AI’s rapid advancement and growth, many jobs and sectors are becoming commoditized. How do you defend this within your current focus? How should workers upskill?

Saul Leal | CEO | OneMeta 

I go back to what the internet has done for humanity. If you look globally, income per capita around the world has increased because of the internet. It has leveled the playing field, giving opportunities to those who may not otherwise have had them. AI will do the same.

Richard Vass | VP, Learning Solutions | eLearning Brothers

I don’t know that AI will take over all our jobs, but it will undoubtedly decide who gets the jobs. My business is learning and development. The people doing our basic building work are no longer needed because I could have any number of LLMs generate it for me. Those people will need to develop a higher skill set. 

Dan Caffee | Founder & CEO | VOZE

Taking a global perspective, AI helps level the playing field. AI isn’t eliminating jobs as a net; it’s enabling large sections of the world to join the global economy. Through AI, we might be able to access workforces, countries and ideas we previously couldn’t. The ability to tap a whole new level of global intelligence is unbelievable.

Cydni Tetro | Co-Founder & President | Women Tech Council

We have asked this question at every industrial revolution. How many jobs will be eliminated, and where do they go? The answer is the same: new opportunities always come. It is just the types of available jobs that will change. That’s what unlocks the next generation of innovation that helps us solve the world’s next problems. 

Dr. Alex Lawrence | Associate Professor | Weber State University

Instead of trying to block or fight it, I will embrace it. It’s my responsibility to prepare students for what they’ll see when they leave school, and they’ll be using it like crazy. We are currently in a moment in history where students can immediately jump to the top of any company when it comes to understanding AI.

Natasha Allen | Partner | Foley & Lardner

In the legal profession, everyone is worried we will get taken over. We can’t hide from it any longer; it is part of our profession. Our clients are going to require us to become more efficient. I don’t think we’ll lose our jobs; we’ll become more effective at our jobs.

Br33 Jones | Founder & CEO | GAM3R STUDIOS

Jeffrey Katzenberg recently went on record to state that he believes 90 percent of jobs within the entertainment industry will be completely eradicated because of AI. I condemn him for fanning the flames of fear in this case. We’re not here to discuss whether AI is going to happen or not; it is. We can either be afraid or embrace the change.

Carlos Kemeny | Co-Founder & CEO | DrumData.ai

As a CEO, I have a fiduciary responsibility to displace jobs that can be automated because my shareholders require me to operate more profitably. Generation Z gives us an opportunity to start shifting the boards to different value sets. We need to optimize for society as well. Education is very slow to change, and most of the public schools around me are fighting the change AI brings. If we don’t act now and incentivize education and businesses differently, I worry.

Jacob Miller | VP, Data Science | Pattern

We hire phenomenal writers. Then, we build AI and machine learning technology to multiply them by three, five and ten. We’re looking for experts who have put in the hours, time and repetitions through their educational and professional careers to become excellent at what they do. If they’re not great, they are harder to work with.

How does education keep up with something like this? What should we be doing as an industry to help out?

Dr. Alex Lawrence | Associate Professor | Weber State University

Academics and education are slow-moving and averse to change, but I understand why AI scares them. There is a disincentive to take risks in education. There are lawsuits all the time in Utah. There are reasons not to embrace this stuff. At some point, we have to stop talking about the options and start safely testing educational approaches to AI. 

Edson Barton | Co-Founder & CEO | YouScience

We need to consider changing education at an earlier age. At the university level, it’s too late. We must look at how we want to construct society and have the education system support that. If we don’t, the divide between the haves and the have-nots will get bigger.

Dr. Cain Elliott | Head Legal Futurist | Filevine

The best technologists I hire are usually people with backgrounds in the humanities. We need to focus on teaching people how to learn; they’ll be fine with this and other technology. If you focus on preparing students for certain career paths, you’re doing them a serious disservice.

Earl Foote | Founder & CEO | Nexus IT Consultants

Can we expect educators to educate on AI if they don’t understand it? If we want to create a shift there, we have to start thinking about educating educators first and making it safe. As business leaders, the most significant thing we can do is tell learning institutions what we need in the talent pipeline and create the demand for it. The people in decision-making positions will have to start thinking about how to meet industry demands as they apply to this technology.

Cydni Tetro | Co-Founder & President | Women Tech Council

At the high school level, my kids have experienced lots of teachers injecting innovation into the ecosystem. We have a long way to go, but teachers are serving mass demographics, not just one student—we need an educated society across the board. We need education to be fundamental to how we solve world problems.

Dan Caffee | Founder & CEO | VOZE

In our current education model, luck and circumstance have a huge impact—not all schools are equal. How do you level the playing field? The internet and AI democratize access to the very, very best. A kid in rural Indonesia might now have access to the same education and career opportunities as somebody born in the right place. The world is better for it. It’s kind of scary, but it’s mostly scary for people I’m less worried about.

What role do government policymakers play in creating a regulatory framework that fosters innovation while addressing potential risks and ethical considerations in AI development and deployment? 

Nate Sanders | Co-Founder & CEO | Artifact

Bill Gurley recently gave a talk with half a dozen examples showing there is almost zero evidence of regulation leading to more innovation in technology—it’s always done the opposite. If poor regulation continues, we are essentially at the mercy of companies like Meta because they have the capital to wade through the space.

Edson Barton | Co-Founder & CEO | YouScience

If we’re going to create a system that benefits everybody, we have to look at what we’re trying to achieve and then build regulations toward that. If we’re reactive, we’re going to get the same results, which are always crappy.

Saul Leal | CEO | OneMeta 

In other areas, we have established that an intelligence has a jurisdiction and that money earned through a person’s efforts belongs to that person. That has not been defined for AI. Who owns the successes and failures? Not being reactive is the biggest challenge.

Barclay Burns | Founder | GenerativeImpact.AI

We’re working with health care systems on using synthetic data to build data sets around patient outcomes and costs. There are a lot of insights you can derive when you turn things into synthetic data. Many privacy issues currently swimming around HIPAA aren’t as pronounced when you do that.

Margaret Busse | Executive Director | Utah Department of Commerce

The reactive versus proactive framework is interesting because you don’t want to be so proactive that you’re regulating things before you know what will happen. To some degree, regulation has to be reactive. Lawmakers often don’t understand everything. We should build something that allows us to observe and learn. We need computer scientists to get into government and help educate.