

The new frontiers of AI and robotics, with CMU computer science dean Martial Hebert


Martial Hebert, dean of the Carnegie Mellon University School of Computer Science, during a recent visit to the GeekWire offices in Seattle. (GeekWire Photo / Todd Bishop)

This week on the GeekWire Podcast, we explore the state of the art in robotics and artificial intelligence with Martial Hebert, dean of the Carnegie Mellon University School of Computer Science in Pittsburgh.

A veteran computer scientist specializing in computer vision, Hebert is the former director of CMU’s prestigious Robotics Institute. A native of France, he also had the distinguished honor of being our first in-person podcast guest in two years, visiting the GeekWire offices during his recent trip to the Seattle area.

As you’ll hear, our discussion doubled as a preview of a trip that GeekWire’s news team will soon be making to Pittsburgh, revisiting the city that hosted our temporary GeekWire HQ2 in 2018, and reporting from the Cascadia Connect Robotics, Automation & AI conference, with coverage supported by Cascadia Capital.

Continue reading for excerpts from the conversation, edited for clarity and length.

Listen below, or subscribe to GeekWire in Apple Podcasts, Google Podcasts, Spotify or wherever you listen.

Why are you here in Seattle? Can you tell us a little bit about what you’re doing on this West Coast trip?

Martial Hebert: We collaborate with a number of partners, including many industry partners. That’s the purpose of this trip: to establish those collaborations and reinforce them on various topics around AI and robotics.

It has been four years since GeekWire was last in Pittsburgh. What has changed in computer science and the technology scene?

The self-driving companies Aurora and Argo AI are expanding quickly and successfully. The whole network and ecosystem of robotics companies is also expanding quickly.

But in addition to the expansion, there’s also a greater sense of community. This is something that has existed in the Bay Area and in the Boston area for a number of years. What has changed over the past four years is that our community, through organizations like the Pittsburgh Robotics Network, has solidified a lot.

Are self-driving cars still one of the most promising applications of computer vision and autonomous systems?

It’s one very visible and potentially very impactful application in terms of people’s lives: transportation, transit, and so forth. But there are other applications that are not as visible that can also be quite impactful.

For example, things that revolve around health, and how to use health signals from various sensors — those have profound implications, potentially. If you can have a small change in people’s habits, that can make a tremendous change in the overall health of the population, and the economy.

What are some of the cutting-edge advances you’re seeing today in robotics and computer vision?

Let me give you an idea of some of the themes that I think are very interesting and promising.

  • One of them has to do not with robots or systems, but with people. It’s the idea of understanding humans: understanding their interactions, understanding and predicting their behaviors, and using that to have more integrated interaction with AI systems. That includes computer vision.
  • Other aspects involve making systems practical and deployable. We’ve made fantastic progress over the past few years based on deep learning and related techniques. But much of that relies on the availability of very large amounts of curated, supervised data. So a lot of the work has to do with reducing that dependence on data and having much more agile systems.

It seems like that first theme of sensing, understanding and predicting human behavior could be applicable in the classroom, in terms of systems to sense how students are interacting and engaging. How much of that is happening in the technology that we’re seeing these days?

There are two answers to that:

  1. There’s a purely technology answer, which is: how much information, how many signals can we extract from observation? There, we have made tremendous progress, and certainly there are systems that can be very performant.
  2. But can we use this effectively in interaction in a way that improves, in the case of education, the learning experience? We still have a ways to go to really have those systems deployed, but we’re making a lot of progress. At CMU in particular, together with the learning sciences, we have a large activity there in developing those systems.

But what is important is that it’s not just AI. It’s not just computer vision. It’s technology plus the learning sciences. And it’s critical that the two are combined. Anything that tries to use this kind of computer vision, for example, in a naive way can actually be disastrous. So it’s very important that those disciplines are linked properly.

I can imagine that’s true across a variety of initiatives, in a bunch of different fields. In the past, computer scientists, roboticists, people in artificial intelligence might have tried to develop things in a vacuum without people who are subject matter experts. And that’s changed.

In fact, that’s an evolution that I think is very interesting and necessary. So for example, we have a large activity with [CMU’s Heinz College of Information Systems and Public Policy] in understanding how AI can be used in public policy. … What you really want is to extract general principles and tools to do AI for public policy, and that, in turn, converts into a curriculum and educational offering at the intersection of the two.

It’s important that we make clear the limitations of AI. And I think there’s not enough of that, actually. It’s important even for those who are not AI experts, who do not necessarily know the technical details of AI, to understand what AI can do, but also, importantly, what it cannot do.

[After we recorded this episode, CMU announced a new cross-disciplinary Responsible AI Initiative involving the Heinz College and the School of Computer Science.]

If you were just getting started in computer vision and robotics, is there a particular challenge or problem that you just couldn’t wait to take on in the field?

A major challenge is to have truly comprehensive and principled approaches to characterizing the performance of AI and machine learning systems, and evaluating this performance, predicting this performance.

When you look at a classical engineered system — whether it’s a car or an elevator or something else — behind that system there’s a couple of hundred years of engineering practice. That means formal methods — formal mathematical methods, formal statistical methods — but also best practices for testing and evaluation. We don’t have that for AI and ML, at least not to that extent.

That’s basically the idea of going from the components of the system all the way to being able to characterize the entire end-to-end system. So that’s a very large challenge.

I thought you were going to say, a robot that could get you a beer while you’re watching the Steelers game.

This goes to what I said earlier about the limitations. We still don’t have the support to handle those components in terms of characterization. So that’s where I’m coming from. I think that’s critical to get to the stage where you can have the beer delivery robot be truly reliable and trustworthy.

See Martial Hebert’s research page for more details on his work in computer vision and autonomous systems.

Edited and produced by Curt Milton, with music by Daniel L.K. Caldwell.


