Data-Smart City Pod

How to Responsibly Navigate Government's AI Frontier with Luis Videgaray

Episode Summary

Professor Stephen Goldsmith interviews Luis Videgaray, director of the MIT AI Policy for the World Project and former Foreign Minister and Finance Minister of Mexico. In this episode they explore the dynamic landscape of AI usage in cities, from procuring tools responsibly to leveling up adoption.

Episode Notes

In this episode host Professor Stephen Goldsmith interviews Luis Videgaray, director of the MIT AI Policy for the World Project, senior lecturer at the MIT Sloan School of Management, and former Foreign Minister and Finance Minister of Mexico. In this captivating discussion they explore the dynamic landscape of AI adoption in cities, from basic applications to transformative processes, why generative tools demand leadership attention, and the best approach to AI procurement. They also propose novel ideas around the role of AI in a bureaucratic organization.

Music credit: Summer-Man by Ketsa

About Data-Smart City Solutions

Data-Smart City Solutions, housed at the Bloomberg Center for Cities at Harvard University, is working to catalyze the adoption of data projects at the local government level by serving as a central resource for cities interested in this emerging field. We highlight best practices, top innovators, and promising case studies while also connecting leading industry, academic, and government officials. Our research focus is the intersection of government and data, ranging from open data and predictive analytics to civic engagement technology. We seek to promote the combination of integrated, cross-agency data with community data to better discover and preemptively address civic problems. To learn more, visit us online and follow us on Twitter.

Episode Transcription

Betsy Gardner:

This is Betsy Gardner, editor of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. And you're listening to Data-Smart City Pod, where we bring on the top innovators and experts to discuss the future of cities and how to become data smart.

Stephen Goldsmith:

This is Stephen Goldsmith, professor of Urban Policy at the Harvard Kennedy School. Welcome back to another one of our podcasts. We are delighted today to have Luis Videgaray as our guest. We met Luis when he spoke to the Chief Data Officers Group that we manage at the Bloomberg Center for Cities. He is a senior lecturer at the MIT Sloan School of Management and director of the MIT AI Policy for the World Project, and prior to MIT, he was Foreign Minister and Finance Minister of Mexico. A very distinguished guest who understands well the challenges and applications of AI in the city environment. We welcome you, and thank you for coming back to talk to us today.

Luis Videgaray:

Thank you, Steve. I'm delighted to be here.

Stephen Goldsmith:

In addition to the introduction where I mentioned all the things that you've done, tell us a little bit more about your work and yourself, particularly as it relates to AI policy for the World Project.

Luis Videgaray:

Thank you, Steve. I came to MIT in 2019 after my tenure in the Mexican government, and since then I've been working at the intersection of public policy and AI. So of course this year has been really interesting because of all the extraordinary developments in AI that are very public. But it's been a while now; we are into our fifth year at MIT. This was also the time when the Schwarzman College of Computing was created, and we very much work in the area of public policy and artificial intelligence.

It's an enormous field that has become extremely relevant, but I think that lately there's a sense of urgency that was not there before. Policy about AI is about many things: it includes rulemaking, it includes government procurement of AI, it includes the promotion of AI adoption in a country or in a city or in a state. So there are many concerns and many objectives. There's not just one thing to worry about with AI or one objective to pursue; it's many things. So it's an interesting area, and we try to do it leveraging how good MIT is on the technical side of things. Our focus is perhaps a little bit different from what traditional think tanks and policy spaces would do, because we have the enormous advantage of working very closely with amazing data scientists and artificial intelligence scientists at MIT.

Stephen Goldsmith:

I thought it was particularly interesting when you talk to our chief data officers, the way you help them think about the various aspects of AI, almost an ecosystem of levels of AI that would be applicable in cities. Could you talk to our audience a little bit about that kind of typology and how you were thinking about it?

Luis Videgaray:

I think there's definitely an avalanche of AI opportunities and offers that large organizations, including cities, are facing. It's a time of opportunity, but it's also a time of challenge. So given the current state of AI, with the preeminence of generative AI as of late, I see three different types of AI deployment, let me call them level one, level two, and level three, in a complex organization or large bureaucracy like a city government.

So, level one is the most straightforward: you're taking tools and applications that are already there, think of ChatGPT, and people, perhaps without any guidance from the organization, because these tools are free and available, just start using them to do their work. That is happening very actively already, and there the challenge for the organization is more about setting rules: how to make sure that everything is aligned to the benefit of the organization, that there is no cyber risk, that privacy is protected, and that everybody is walking in the same direction and making the most of the tools.

But that's a very simple deployment, and it's not where the opportunity lies. The real opportunity comes when organizations start to embrace more complex, more ambitious uses of AI. So that brings us to level two.

Level two we can call thin customization. There is already some use of the organization's data, but it's done with relatively few resources; it's quick to do and not very expensive. And there are two functions that are readily available, and a lot of people are being bombarded with offers and ideas about doing this. One would be chatbots and everything that relates to customer service or citizen service; there's already a whole universe of things that can be done there. The other would be querying your own databases, using large language models in particular and using their power to understand language, not necessarily to create text but to understand questions posed in human language, either by people outside the organization or people inside it, and query your own databases.

So, this is not about generating text or images, for example, but just about making it easier to access the whole wealth of knowledge that the organization has. Let's call it the knowledge graph of the organization, and finding a way to use it more effectively. That can be an extremely powerful tool, but still it's not the deepest of adoptions.
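The level-two "query your own databases" idea can be sketched in a few lines of Python: the language model's only job is to turn a natural-language question into a query, and the organization's own database does the rest. Everything here is hypothetical, the `permits` table, the question, and `question_to_sql`, which uses a fixed mapping as a stand-in for a real LLM call so the sketch stays self-contained and runnable.

```python
import sqlite3

def question_to_sql(question: str) -> str:
    # Hypothetical stand-in for an LLM call: a real deployment would send
    # the question plus the table schema to a hosted model and get SQL back.
    canned = {
        "how many permits were issued in 2023?":
            "SELECT COUNT(*) FROM permits WHERE year = 2023",
    }
    return canned[question.lower()]

def answer(conn: sqlite3.Connection, question: str):
    sql = question_to_sql(question)           # model: language -> query
    return conn.execute(sql).fetchone()[0]    # database does the real work

# Tiny illustrative dataset (made-up city permit records).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE permits (id INTEGER, year INTEGER)")
conn.executemany("INSERT INTO permits VALUES (?, ?)",
                 [(1, 2022), (2, 2023), (3, 2023)])

print(answer(conn, "How many permits were issued in 2023?"))  # prints 2
```

The design point matches the discussion: the model never touches the data directly; it only translates language into a query, which the organization's existing database answers.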

So, then we think about level three. By the way, for level two there are already many commercial suppliers, starting with the large tech companies but also many startups, many companies that were doing other things in the data and customer service space and are migrating to AI offerings. So there are many options there, and we can talk in a second about what to do when you're bombarded with many options. But if you think about it, these are still relatively shallow uses of AI. They're not very transformative. They can be extremely helpful, but they are not the most important, the most promising use.

The most promising use is when you actually change the processes of how you do things and start embedding AI to empower the people who make decisions throughout the whole process. This is more complex to deploy because you need true customization; you need to make the organization work a little bit differently. So these are deployments that we don't see yet, but you can definitely think of citizen-facing or internal processes where things are done faster, more efficiently, and with more accuracy to provide services to the public and deliver public goods. I think those will come in time, but they are much more challenging because they take a lot of change and a lot of buy-in from leadership and from the technical teams.

Stephen Goldsmith:

That was spectacular. In fact, that was so good that I'm going to confess, while people are listening to us, that I'm trying to write a paper on some combination of levels two and three. So I'm going to pretend that in this podcast I'm actually interviewing you for my paper at the same time. The theory I've been working on, Luis, is that this is a pretty major change in how bureaucracy should work. Think about Mexico or the US, or the more advanced civil service countries in particular.

So if you took level two, which is querying your own database, plus level three, which is changing your processes, I'm trying to think through whether that should mean we could give field workers, the lower-level workers in a city or country, more discretion, but also manage their discretion better through the tools we have to evaluate how they're making decisions. So what do you think, generally, about my theory that this could be the positive destruction of bureaucracy while still accomplishing a lot of the accountability that we have with bureaucracy? Does that question make sense, at least?

Luis Videgaray:

I think that's a great way to approach it, Steve. I like your approach. And it's always important, I think, to understand that bureaucracies are the way they are for a reason, which is essentially to do the same things over and over. Bureaucracies are very good at operating when there's a well-established process, which means they are also not very good at embracing change or at adapting when the processes change. So for all these promising things to happen, level two and particularly level three, you need a lot of leadership and you need buy-in. If it's a city, you need buy-in from the mayor, the city council, or the city manager, depending on the organization. This is not something that will happen through the technical people alone, the data experts, the AI experts. This is not like adopting just another piece of software and seeing where it goes. To unleash its true potential, this needs to be approached as a change in the organization, and that has to come from leadership, not be treated as a technical problem.

Stephen Goldsmith:

I want to continue with this issue, maybe over the next month rather than the next 10 minutes, trying to think through how best to do it. And I agree that the chatbot world is off to a faster start than the two more sophisticated levels. But Luis, and you referenced this, how would a city official, call them the CIO or CTO or Chief Data Officer, hold ChatGPT or other AI accountable when a solution may be a mixture of technologies, a mixture of public and private? You mentioned to the chief data officers the difficulty of acquiring AI. I think you used the phrase "supply chain," which is a really interesting phrase for this. What recommendations do you have for chief data officers about how to watch out for the challenges, privacy challenges, algorithmic bias challenges, all the other things embedded in these systems?

Luis Videgaray:

I think anybody dealing with AI needs to understand who's producing AI and who's delivering AI, and that is changing very rapidly. We typically used to think of any AI deployment as pretty much requiring you to build a model from scratch: an organization has data, then you train a neural network with that data and make sure it works, that there are no harmful biases, that the data is protected, and that the results are explainable, things like that. That was pretty much the approach. But right now, that's not how AI is being produced and delivered to the world. We published a blog post about three or four months ago that we call "AI Supply Chains and Why They Matter," because the creation of AI is now a layered process. And to be clear, I'm talking about the supply chain of AI itself, not the application of AI to supply chains, which is a different topic.

So first of all, most AI deployments now use as a base, or foundation, a very large AI system. When I say very large, I mean really, really large. These are the large language models, or the even more complex multi-modal models that are starting to surface. These models can only be built by a few organizations because of their scale and how expensive they are to build: think Google, OpenAI, Anthropic, which is now joining, and Meta, of course. A lot of what we are seeing, if you go back to the chatbot applications or querying your own data, relies on very large models that are not going to be produced by the people you're talking to. The suppliers that approach bureaucracies, with a few exceptions, are most likely themselves customers of a large AI system.

And then the question is, okay, that is challenging, because AI is a little bit different from regular software in the sense that in a complex AI supply chain there are some behaviors that cannot be guaranteed. The system can be poorly specified. There could be data correlations that people don't understand when you're mixing two models in order to supply something. So there's a relevant issue, for example, about liability allocation. Right now, if you are an AI company, an AI startup doing customizations and things that are special for cities, that could be very helpful, but you are going to be subject to the terms and conditions of your base model supplier. Maybe that's Anthropic or OpenAI, or maybe you're doing it through Microsoft. Your supplier is going to have limited ability to actually go into the source of the main piece of the AI that you're using. So that creates challenges that are not the usual ones, because we typically think of supply chains where you have very well specified features and behaviors and where you can allocate liability correctly.

This is something that is not 100% clear. So I think it's very important, when people adopt projects and spend public money on adoption, to understand the supply chain: how is liability allocated, and who is accountable when something goes wrong? I see this, for example, with people who are fine-tuning models, and fine-tuning means using a little bit of your own data to retrain a model that already exists, to make it responsive and useful for your own data. When you do that and something goes wrong, you don't know who's responsible, or what capacity the supplier you're talking to has to fix things, and more importantly, to identify bad things before they happen.

It's definitely something that needs to be well taken care of. The other thing is that there's some early indication, this is definitely not settled and nobody knows what's going to happen, that the upper layer, the upstream of the AI supply chain, can be quite concentrated. It can be just a few suppliers worldwide, so everybody is going to be relying on a few very large models. Those models might result in mistakes that everybody is making at the same time. So this can be more systemic than just having a problem in one particular bureaucracy or city.

Stephen Goldsmith:

It seems to me, listening to you, that somebody ought to be selling AI tools to hold AI accountable: an AI solution that will examine my other AI uses for algorithmic bias or privacy violations or whatever. It'd be impossible for an individual to figure it out.

Luis Videgaray:

There's a lot of good work in academia, and some very good companies out there, addressing issues that are real problems with AI. For example, making complex neural networks explainable, so that people understand why some prediction or recommendation comes from the AI, or ensuring that biases are identified and corrected. So there's a lot of good work and there are many tools, but my assessment is that it's still work in progress. Unfortunately, there's no silver bullet where you can say, "I buy this tool, or I hire these guys who are very smart, and I've addressed these concerns and made the AI in my organization unbiased." That is not true. This is a process; this is something that is ongoing. And these very complex systems can sometimes behave in unpredictable ways, particularly when the data they're faced with is not something they saw frequently in training. When dealing with minority cases, or data that is a little different from most of what they usually see, there can be problems. And the tools are not perfect.

So it's very important that organizations realize that the tools are extremely helpful, and I do encourage them to go and look for these tools and try to be ahead of the game, but also with full acknowledgement that this is not going to be perfect: the risk remains, and things change and evolve very rapidly. So people need to be always mindful and always focused on this. It's not something that you fix once and forget.

Stephen Goldsmith:

So we're about out of time. We've got lots more questions, but let me just ask you one more. We've just finished a case study about the data office in Mexico City, and we've also written in the past about New York City. If you put aside those really sophisticated, large, well-organized offices, what's the major challenge for smaller cities, and by smaller I mean 2 million down to 200,000? Is it institutional, is it technological? And let's grant that there are regulatory issues which need to be addressed, but just in terms of applying AI to everyday processes, the three tiers that you mentioned, what would you recommend these other cities do, in terms of structural, institutional, and technological capacity, so that they can take advantage of the benefits of generative AI?

Luis Videgaray:

Well, definitely no city, not even the very large ones like New York City, Mexico City, or London, is going to be creating AI. They're probably going to be customizing AI, perhaps doing some fine-tuning, but that's it. They're not going to be building models from scratch, because pretty much nobody is going to be building models from scratch. So this means that the key function is procurement: understanding what the options are and what the risks are. And in order to have good procurement, you need to have a plan.

This is not just about saying, okay, we have this immediate need, how do we solve it, do we jump on the AI bandwagon? The most important thing, and it doesn't matter if you're a large organization, a mid-size city, or even smaller, is: what's your plan? What do you want to do? Where are you going to go? So, number one, be sure to have a plan.

Number two, this is not the time to say that because we're going to be buying AI tools from the outside, we don't need an AI department or data science within the organization. It's the opposite: you need to become smarter, and you need to understand the technology better.

And number three, and I think this is extremely important, excuse me for being repetitive: you need to make this a leadership question, starting at the top of the organization. The adoption of AI in cities is not something to be delegated just to the "technical people"; that's not appropriate. This is a technology that, if you think of a level-three type of deployment, can really transform the organization. And it's important that leaders in cities understand the basics. They don't need to become coders; they don't need to understand the math behind AI. But it's important that they understand what the technology does and how it can change things. It's important that city managers, mayors, the people who make decisions, become more knowledgeable about the technology. This is not like buying a new software application or just some new CPUs for the organization.

Stephen Goldsmith:

This has been a terrific session; there's so much that you've mentioned that city officials will need to pay attention to. I'm not sure I entirely believe you were a foreign minister and finance minister. You sound more like a technology minister.

Luis Videgaray:

That's what happens to you when you spend five years at MIT.

Stephen Goldsmith:

Come over to the Kennedy School and we'll reintroduce you to government at the same time. But this is Steve Goldsmith, professor of Urban Policy, with Luis Videgaray, who has offered enormously insightful comments on what government can do to improve itself using these new tools. Thank you very much, Luis, for your time. We appreciate it.

Luis Videgaray:

Thank you, Steve. It's a pleasure.

Betsy Gardner:

If you liked this podcast, please visit us at datasmartcities.org. Find us on iTunes, Spotify, or wherever you get your podcasts. This podcast was hosted by Stephen Goldsmith and produced by me, Betsy Gardner. We're proud to serve as a central resource for cities interested in the intersection of government, data, and innovation. Thanks for listening.