CAIOs must understand business policy and strategy in addition to healthcare and IT

Editor’s Note: This is the second part of a two-part interview. To read the first part, click here.

Dennis Chornenky, chief AI advisor at UC Davis Health, knows what it takes to be a chief AI officer in healthcare—he’s been one twice.

That’s why we sat down with him for this two-part interview — to share the lessons he’s learned about this new C-suite role in healthcare.

Today Chornenky, who has two decades of IT leadership experience and also serves as CEO of Domelabs AI, discusses where and how UC Davis Health is making the most of AI.

He describes some of the many artificial intelligence projects he’s working on in California’s health system — and offers advice for other executives who may seek to become the head of AI for a hospital or health system.

Q. Please talk at a high level about where and how UC Davis Health is using artificial intelligence.

A. I am fortunate to have the opportunity to work with UC Davis Health and the great leadership there. It has a great vision, it's very innovative, and it has great clinicians and staff, just a great team all around.

We’re tracking more than 80 applications of AI across the health system, and it’s quite a diverse range. A lot of that also comes from individual research grants from the NIH and others that some of our researchers and clinicians are engaged in, some really interesting applications.

And it's a variety of applications in care delivery, patient engagement, patient management, operations and administration. We've been looking a lot more at the administrative side lately. We recently held a UC-wide conference at UCLA focused on how we can think about using AI more on the administrative side across all the different UC campuses and academic medical centers.

I don’t really want to get into any particular vendor, but it’s been great to see a fairly rapid adoption of AI. I think it still has a long way to go.

There are so many skill sets involved. As I mentioned in the first part of our interview, AI is evolving very quickly. A big part of the role now is thinking about how we position ourselves for the things that are going to be really important, really powerful, in just the next year or two.

Sam Altman, CEO of OpenAI, which makes ChatGPT, recently said he thinks we may have something AGI or AGI-like [artificial general intelligence] within a thousand days. So I think to the extent that something can mimic those capabilities, whether we want to think of it as AGI or not, it's going to be very powerful. [Editor's note: AGI is software with intelligence similar to that of a human being and the ability to self-teach.]

That kind of capability would be much more powerful than what we have, even in the most advanced models that have been released so far. So how organizations think about positioning for this is a really important dimension, both from the governance side and the adoption side.

Q. More specifically, please describe and discuss just one particular AI project you are proud of that is working well for UC Davis Health and some results you are seeing. How did you oversee this project?

A. I do not oversee AI projects individually. I'm a few steps removed from that, focused more on strategic governance, providing assurance and broader direction for innovation and adoption. But we certainly, as I mentioned, follow different projects, encourage them and help support them with different resources in different ways.

One that I can mention that is really good is the adoption of a technology we have used to help us identify stroke patients and prioritize strokes. This has been really helpful. The vendor we're working with also helps share some of that information across other academic medical centers and health systems, creating more efficiency and better journeys for patients who may also be seen at several other organizations.

And it's really improving patient outcomes in this space. The ability to identify a stroke sooner makes a big difference in a patient's outcome. So this is a project we are very proud of.

Q. What are some tips you would offer to other IT leaders looking to become the head of AI for a hospital or health system?

A. This is a really interesting question, and I hear from a lot of colleagues and people who have seen my journey and are interested in doing something similar. A lot of people have really great backgrounds, and so they're thinking about how to potentially advance in that space. I'll say again, at a high level, it's what I mentioned in the first part: I think you have to really think about the different dimensions of the skill sets that will be required to be successful in this role in the future.

So understanding policy, business strategy, technology, what it can and can’t do, and having domain expertise for whatever domain you’re getting into. If you feel like you have some of those, but maybe you’re a little lacking in some of the other areas, I would definitely encourage people to delve into those other areas and expand their skills in general.

Because, again, AI is a multidimensional technology, and a multidimensional capability requires, I think, multidimensional leadership. And it's evolving so quickly, and governance is evolving too, even though governance is far behind AI. It is very complex.

And this is what I call the AI governance gap – where you have technologies that are evolving much faster than governance can catch up.

And you have very limited in-house expertise, especially in regulated sectors like healthcare and government. It becomes really challenging for those organizations to adopt AI quickly when new technology comes out, especially if they don't have guardrails in place. So we've seen a lot of memos across academic medical centers and other organizations over the last year saying, please don't use ChatGPT until there's a clear policy laid out for where you can use it.

Now, some people go ahead and use it anyway. It is not something organizations can always control. Of course, it's best to implement those policies ahead of time and to understand what types of applications and activities, and what potential risk impacts or threat vectors, you're likely to see.

I think cybersecurity is probably another one I should mention for people interested in this role. Cybersecurity is becoming really critical, especially in healthcare. Many threat actors see healthcare as somewhat of a soft target that is rich with highly valuable data that can be used in many different ways, from ransomware to augmenting other attack capabilities with additional data.

So I think understanding the intersection of AI and cybersecurity is also very important.

I recommend that people just educate themselves on these different dimensions, try to develop as many skills as they can and as much understanding as they can in those areas, read the news, do their best to keep up, and partner with good people.

It is difficult for anyone to be a deep expert in all of these areas. So it's really good to partner and have good communities where there's peer-to-peer collaboration between leaders, so if you end up in an AI leadership role in your organization, you have the skill sets and the broad perspective needed to help your organization bridge that AI governance gap.

For valuable BONUS content not found in this article, click here to watch the HIMSS TV video of this interview.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.
