Data-Smart City Pod

The Complicated Interplay: AI and Government

Episode Summary

Professor Steve Goldsmith interviews MIT professor and dean Dan Huttenlocher on artificial intelligence, use cases for generative AI in government, and balancing the human with the digital in a bureaucracy.

Episode Notes

In this episode Professor Goldsmith interviews Professor Dan Huttenlocher, inaugural dean of the MIT Schwarzman College of Computing and expert on artificial intelligence and computer science. They discuss the different ways that generative AI could be used by governments in service of constituents, and what kinds of operational standards are required for the productive and safe use of AI technologies.

Music credit: Summer-Man by Ketsa

About Data-Smart City Solutions

Data-Smart City Solutions, housed at the Bloomberg Center for Cities at Harvard University, is working to catalyze the adoption of data projects at the local government level by serving as a central resource for cities interested in this emerging field. We highlight best practices, top innovators, and promising case studies while also connecting leading industry, academic, and government officials. Our research focus is the intersection of government and data, ranging from open data and predictive analytics to civic engagement technology. We seek to promote the combination of integrated, cross-agency data with community data to better discover and preemptively address civic problems. To learn more, visit us online and follow us on Twitter.

Episode Transcription

Betsy Gardner:

This is Betsy Gardner, editor of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. And you're listening to Data-Smart City Pod, where we bring on the top innovators and experts to discuss the future of cities and how to become data smart.

Stephen Goldsmith:

Welcome back. This is Steve Goldsmith, Professor of Urban Policy at the Harvard Kennedy School, with another one of our podcasts. We have a terrific guest today, and we intend to discuss with him artificial intelligence and government, which is a large, emerging, and critical conversation. I'll be speaking with Dan Huttenlocher, the inaugural dean of the MIT Schwarzman College of Computing. He is also professor of electrical engineering and computer science at MIT. Dan, thanks for being here.

Dan, there's been so much conversation about ChatGPT and generative AI in its other forms. We've been looking at how those technologies affect large bureaucratic organizations like municipal governments, where there are lots of agencies, lots of bureaucratic levels and hierarchy, and so many procedures. What do you think about the positive applications of AI in such an environment?

Dan Huttenlocher:

Yeah, it's a terrific question. My answer will be a little bit about what people should be working towards now, because this technology is changing so quickly, but some of this is plausible even today. The way I think about generative AI technology is: how can it be used to really help and augment people in doing their jobs better? There's been a lot out there in the news about AI replacing people. Certainly in the medium and long term we need to worry about that, but I don't think that's the operational question at the moment. Partly that's because, in the end, the people who do these jobs in whatever large organization it is, and then their managers and the leaders of those organizations, are responsible to whoever they're serving.

So in a city government, they're responsible to their constituents if they're serving the citizenry of the city, or to other agencies if that's who they're serving, or to whoever their customers are, whoever they're supporting. And we're nowhere near the place where we can delegate that kind of responsibility to AI. That kind of responsibility belongs with humans.

I personally believe it will belong with humans well past my lifetime, and maybe well past many others'. You certainly hear people saying otherwise, but I don't really think that's where we should be looking right now. And there are amazing things that one can start to do with generative AI technologies in this regard. One of those is just as a sort of smart assistant in anything that you're doing: the ability to bounce ideas off of it the way you might with colleagues. You have some question you're trying to answer, it's a little hard to figure out how to think about it, you'd go ask colleagues in the hallway, but maybe it's not worth convening a whole meeting of a bunch of people. These are the kinds of places where generative AI chatbots and similar tools can be very helpful. But as with brainstorming with people, they can suggest somewhat crazy things sometimes. So it really is in this help-me-think-about-this mode, rather than a do-this-for-me mode, if that makes sense.

Stephen Goldsmith:

Yeah. That's interesting. Let me ask a follow-up question, maybe cloaked a little bit in Kennedy School language rather than MIT language. Bureaucracies have certain advantages: they restrict people from abusing their discretion. But there are lots of routines that require lots of pieces of paper and approvals, and these often infuriate the public because the processes seem mindless. I'm trying to envision how you would allow more discretion on the part of a person in a bureaucracy who touches the public, who's in a retail role, while still being able to manage those field employees to make sure they're exercising that discretion correctly. So from a supervisor's standpoint, how could you accommodate this newfound discretion and still hold employees accountable? Is that even possible, do you think?

Dan Huttenlocher:

Absolutely. I think there's a huge opportunity with advances in AI to serve whoever an organization is trying to serve more effectively, be they customers of a for-profit institution, the general public in city government, or a specific group, say businesses, if it's a permitting office. It's a complicated interplay. As you said, bureaucracies are important. You don't want unlimited individual discretion. There are rules to be followed. Each individual is representing that government, and if they say something that's wrong or misleading, that's almost worse than somebody not being able to get an answer at all. So the bureaucracy is an important piece, but the bureaucracy is also inherently rigid; it's always a little bit behind the times in anything that's moving quickly. And so these are places where I think the combination of humans and artificial intelligence stands to provide much, much better customer service.

It goes both directions. You could imagine an AI chatbot that's directly providing answers to the public, or to a customer in a for-profit setting. Unsupervised, that's problematic. But as we were just saying, in a bureaucracy the supervisors are supervising the individuals, so you can have supervisors supervising a bunch of chatbots too. That's one way to look at this. Another way to look at this is that the chatbots themselves can be looking at the answers that humans are giving and flagging places that look inconsistent, to the chatbot, with what the policies actually are. It's going to take rethinking these kinds of things, but these are places where I think the combination can provide much better service, both in terms of the accuracy and the things that require supervision, and also in terms of timeliness and relevancy, and frankly in reducing the frustration level for city frontline workers who often feel hamstrung by pages of regulation that they have to follow when they're really just trying to help a constituent with something. So I think there's a lot to do here, but these are still design problems. It's not as if there's technology off the shelf that a city can go use, but I think this is the time for cities to be thinking creatively about how to start these things.
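
To make the second pattern Dan describes more concrete, here is a minimal sketch in Python of a review loop in which a model checks human-written answers against policy text. It assumes the OpenAI Python SDK; the model name, prompts, and output shape are illustrative assumptions, not a recommended design.

# consistency_check.py -- a hypothetical sketch, not a production design.
# An LLM compares a frontline worker's written answer against the relevant
# policy text and flags apparent conflicts for a human supervisor to review.
# The model name, prompts, and JSON shape are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flag_inconsistency(policy_text: str, staff_answer: str) -> dict:
    """Ask the model whether an answer appears to conflict with policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You review answers given by city staff against official "
                "policy. Respond in JSON with keys 'inconsistent' "
                "(true/false) and 'reason'. Only flag clear conflicts; "
                "when unsure, set 'inconsistent' to false so a human "
                "makes the call."
            )},
            {"role": "user", "content": (
                f"POLICY:\n{policy_text}\n\nSTAFF ANSWER:\n{staff_answer}"
            )},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example use (hypothetical names): queue only flagged answers for review.
# result = flag_inconsistency(permit_policy, answer_given_to_resident)
# if result["inconsistent"]:
#     supervisor_queue.append(result["reason"])

Note the design choice in the prompt: the model is told to defer to humans when unsure, which matches the supervised, human-in-the-loop framing of the conversation.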

Stephen Goldsmith:

One day when I was Deputy Mayor of New York, I went into a performance management room for the child welfare department, which was very sophisticated in New York City at the time, and I watched the operation of that stat program. As I was leaving, I visited with a couple of folks at the desks where the supervisors sat, and on those desks were literally stacks and stacks of files. And I thought to myself, there's no way those supervisors in child welfare, no matter how professional they are, could consume all of the necessary information in those files to determine which of their employees were outstanding and which ones needed remediation. But to your point, what would happen if ChatGPT consumed that information for purposes of informing the supervisor about corrective action or superlative performance? Would that be one such application?

Dan Huttenlocher:

Yes, and I think we're at a point where the summarization capabilities of these generative models like ChatGPT are very impressive. They can create a short summary that reflects a lot of more extensive information, and frankly, those summaries can be tuned, for example, to highlight things that look like an exception or an outlier, something someone should pay attention to. So it's not just summarizing; it's summarizing in a way that's informative to somebody like a case supervisor who's overseeing a whole bunch of interactions at something like a child welfare organization. At the same time, and I know this has been covered in the news a lot, these things are not completely accurate right now. But neither are humans. And so sometimes I think we're expecting a little too much out of this technology. What we really need are procedures, and this is a place where I think bureaucratic organizations have an advantage.

We need procedures for how we take this technology, which is very powerful but not flawless, just like humans are not flawless. We need to think about how we integrate these powerful technologies with humans and human judgment, and I think we will end up with much, much better outcomes.

A colleague of mine makes a distinction that I've found quite useful: individual humans are fallible. You can think of a lot of what a bureaucracy does as trying to make use of a lot of humans in serving some group of people while accounting for the fact that people make mistakes. People may also have agendas you need to look out for, but let's assume for the moment that everybody is really doing their best to serve the public well. The flip side is that these algorithms, generative AI, AI more generally, and even earlier algorithms, have a certain kind of fragility to them. They don't have the judgment and other things that humans apply. So it's really about combining algorithms with the places where you really need human intuition, empathy, and judgment, things that are not present in AI or in other algorithms, and in that combination, I think, you get the ability to address the fallibility of a bunch of individual humans trying to do these jobs by themselves.
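
To ground the summarization use case from the exchange above, here is a minimal sketch in Python, again assuming the OpenAI Python SDK; the case-file directory, model name, and prompt wording are hypothetical.

# case_summaries.py -- a minimal sketch under stated assumptions.
# Condense long case files into short summaries tuned to surface the
# exceptions a supervisor should look at first. The directory layout,
# model name, and prompt wording are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SUMMARY_PROMPT = (
    "Summarize this case file in five sentences or fewer. Then list, as "
    "bullet points, anything that looks like an exception or outlier: "
    "missed visits, contradictory notes, unusually long gaps between "
    "contacts. If nothing stands out, say so explicitly."
)

def summarize_case(path: Path) -> str:
    """Return a short, exception-flagging summary of one case file."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": SUMMARY_PROMPT},
            {"role": "user", "content": path.read_text()[:100_000]},  # crude length cap
        ],
    )
    return response.choices[0].message.content

# The supervisor reads the summaries instead of the full stack of files;
# the files themselves remain the source of truth for any decision.
for case_file in sorted(Path("case_files").glob("*.txt")):
    print(case_file.name, summarize_case(case_file), sep="\n", end="\n\n")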

Stephen Goldsmith:

Another question that many of our students and public employees have been wrestling with is the challenge of algorithmic bias. Of course, algorithmic bias exists because people are biased; algorithms train on biased data. So you can't just say the technology is biased or the people are biased; it's a question of how they interact. How can we be careful about the bias, but at the same time maybe use generative AI to uncover the bias in data or people?

Dan Huttenlocher:

I think it's very important to have procedures in place to inspect any kind of automated approach for bias and to address it, but it's also important for the things that individuals are doing. If you look at the criminal justice system, you need to look at prosecutors and judges and others, just as you need to look at algorithms, for the kinds of biases that might be affecting their decision making. And I think that by putting more processes in place that combine both automated techniques, like AI techniques, and human decision making, you make more of those issues explicit, along with the inspection for them, and then the combination can really help address bias. As important as bias is, I think a lot of other aspects of AI are similar; it comes back to the fallibility of human decision making, of which bias is another example. It's very important that we have processes in place that can lead to better outcomes through the combination of AI and human decision making.

Stephen Goldsmith:

Dan, just one or two more questions. I know you have experience with large private organizations, and we advise governmental entities, mostly at the city level. How would you structure a large governmental office to make the best and safest use of AI? Where should that sit structurally? Who should own that work? What models have you seen in large private bureaucracies, and what are the trade-offs associated with those models?

Dan Huttenlocher:

Yeah, this is a terrific and very important question. There's no simple answer, because for many technologies the private sector drives, and will continue to drive, the development. But the use of those technologies really does need to be the responsibility of whatever organization is deploying them, be it a private-sector organization, a civil society organization, a university, or a city, state, or federal government. The responsibility for that use needs to rest with those organizations, and I think that responsibility has to be shared within them. One piece is some kind of central office that has expertise in and understanding of these AI methods and what they can be useful for. But you also need expertise in the specific domain in which the technology is being applied, because that central office, while it may have AI expertise, doesn't have the domain expertise. This is true of all advanced technologies in the digital realm: you can build some wonderful piece of technology, and it doesn't actually help the organization get its job done better unless it involves stakeholders in the organization.

And this is equally true, maybe even more true, with AI, because I think of AI as amplifying the work that individual humans do. If you amplify the bad instead of the good, that's particularly problematic. So I think some AI office is important, but that should not be the office that's responsible for and overseeing the deployment of these projects in city government. That has to be owned by the organizations that are doing the work, together with the central office that's bringing the expertise, and there are going to have to be some commercial services providing the actual technology. There can sometimes be use of open-source technologies as well, but one needs a lot of expertise to use those in ways that are appropriately vetted and safe once you're providing services to the public.

So I think some mix of these, and it's very important that the offices doing this work take a leadership role. Maybe that's a little abstract, so let me point to one direction I think is particularly useful. I don't think anybody in local government, in state or federal government, or in for-profit organizations, I don't think any of the users of those services, thinks that they get great customer service. We just have a problem with customer service, with access to information, with help-me-solve-my-problem. So I do think that customer service, broadly construed, is a place where AI is going to make huge strides in really making things much more effective. When you think of any government agency that has a large volume of inbound requests from constituents or others they're serving who are trying to solve problems, the kinds of things that fit a customer-service paradigm, I think that's a huge place where one can do much better, whether it's a for-profit, a government, or a nonprofit.

Stephen Goldsmith:

Another aspect of accountability I thought about during your answer is how one would use AI to read, if you will, all of the reports, all of the pieces of paper that are generated in an agency or by a certain group of individuals, and evaluate them. So could you say to ChatGPT, please review all this material and help me identify, and then fill in the blank, A, B, or C, so that you could consume much greater amounts of information and identify the trends or the indicators in that information? Is that possible? Could we think about how that actually improves accountability?

Dan Huttenlocher:

Exactly. These machine learning algorithms can ingest a huge amount of data and give back summaries, and also, as we were talking about in a different context, give back summaries that are looking for particular things. So if you're a nonprofit that's looking for certain kinds of things the city government might be doing that you disagree with, or that you think are inconsistent with its commitments to its constituency, people will be able to do large-scale searches like that in a way they cannot today, when it's really humans reviewing this material.

But the flip side of that is an amazing opportunity. Again, cities are all short on resources, but it's an opportunity for cities to look at critical things where they know they're likely to get called on how they're doing, and to get out ahead of it by starting to do some of this themselves. If the city government can have a win or two around something that really matters to important voices in its community, something it found itself and was proactive about by using some of these techniques rather than waiting for others to do it, I think that can be a huge win. And it's just better governance in the end, regardless of where it comes from. But I think it's an opportunity to get ahead.
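
As a sketch of what this kind of proactive, large-scale review could look like, here is a short Python example, assuming the OpenAI Python SDK; the watch topics, report directory, and model name are all hypothetical.

# trend_scan.py -- a hypothetical sketch of the "get out ahead of it" idea.
# Scan a folder of agency reports, ask the model to tag each one against a
# short list of watch topics, and tally the results so trends surface before
# outside groups find them. Topics, paths, and the model are assumptions.
import json
from collections import Counter
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
WATCH_TOPICS = ["permit delays", "missed inspections", "complaint backlog"]

def tag_report(text: str) -> list[str]:
    """Return the subset of WATCH_TOPICS the model finds in a report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                'Given a report, return JSON like {"topics": [...]} '
                f"containing only items from this list: {WATCH_TOPICS}."
            )},
            {"role": "user", "content": text[:100_000]},  # crude length cap
        ],
    )
    return json.loads(response.choices[0].message.content).get("topics", [])

tally = Counter()
for report in Path("reports").glob("*.txt"):
    tally.update(tag_report(report.read_text()))

# The tally points a human analyst at where to dig; it is a screening
# step, not a finding in itself.
print(tally.most_common())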

Stephen Goldsmith:

Dan, our podcast goes mostly to nonprofit and government employees, folks who care a lot about the performance of their services and the quality of life in their communities. Three or four years from now, what would be the most important thing we might have accomplished with generative AI that would actually advance the delivery of services in these public-facing organizations?

Dan Huttenlocher:

So one of the things I always like to remind myself of when there's a big, sudden change in technology is that we tend to overestimate the short-term impact and underestimate the medium and longer term. It feels like it's going to be immediate, and everyone's in a sort of land rush; everybody's running. But if you think about the internet and how long it took for that to have broad effect: it was 1993 or '94, 30 years ago, and it certainly took 15 years or so before it had big, broad impact, and maybe even more in many government settings. So I think these technologies will continue to evolve very quickly over the coming three, four, five years. We've really only had about eight years or so of AI technology being deeply integrated into the things we do, and generative AI technology for really only a year or so.

But for the broader impact, I think we're looking at a 10-, 15-, 20-year timeframe, and it's really about trying to find those places where we can have early impact that's positive in the short term and in the long term. The caution I would offer is that we should always remember, with these new technologies, how much excitement there was in the early days of social media about its opportunity to really democratize and empower the disenfranchised. That's not the picture most of us have of social media today. You have to think about the positives and the negatives of these technologies from the very beginning. That doesn't mean don't pursue them; I think one should pursue them for their positives, but while staying focused on how to deliver the positive outcomes, rather than getting excited about the positive and running off in pursuit of it without thinking about other aspects. I think that will be very true with AI, and it's something that, frankly, governments and nonprofits can play a very important role in.

Stephen Goldsmith:

Dan, there's so much here, so much opportunity, so many challenges. Let me just say we really appreciate you being on the podcast and I'm looking forward to working with you on these topics going forward. Thanks again.

Dan Huttenlocher:

Good to see you. Thanks.

Betsy Gardner:

If you liked this podcast, please visit us at datasmartcities.org. Find us on iTunes, Spotify, or wherever you get your podcasts. This podcast was hosted by Stephen Goldsmith and produced by me, Betsy Gardner. We're proud to serve as a central resource for cities interested in the intersection of government, data, and innovation. Thanks for listening.