Michael I. Jordan

Rethinking Machine Learning

Machine Learning, or ML, is often described as a natural outgrowth of the intersection of computer science and statistics. Machine learning builds on both and focuses on the question of how to get computers to program themselves from an initial structure and subsequent experience. Yet merely thinking about machine learning in such stark outlines misses the question of how we can build systems that are attentive to social welfare, create new economic markets, and solve large-scale problems that enhance human life.

Our guest in this episode is Professor Michael I. Jordan, who is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive and biological sciences, and he is considered the world’s foremost authority on ML.

Guest: Michael I. Jordan

Hosted by: Alexa Raad & Leslie Daigle


Transcript (auto-generated by Descript)

Alexa Raad: The term artificial intelligence was coined in 1956. That year an AI proof of concept, a program called the Logic Theorist, developed by Allen Newell, Cliff Shaw, and Herbert Simon, was presented at the Dartmouth Summer Research Project on Artificial Intelligence, hosted by John McCarthy and Marvin Minsky. Minsky and McCarthy, both later recognized as fathers of AI, described artificial intelligence

as any task performed by a machine that would have previously been considered to require human intelligence. So far, though, no software, or entity comprised of software and hardware, has in fact exhibited human-level intelligence and cognition. Most of what is labeled AI today is in fact a subset of AI called machine learning. Machine learning, or ML, is often described as a natural outgrowth of the intersection of computer science and statistics.

If the defining question of computer science is "Is a given problem solvable, and if so, how do we build a machine to solve it?", and the defining question of statistics is "What can we deduce or infer from a set of data and a set of modeling assumptions, and with what reliability?", then machine learning builds on both. It focuses on the question of how to get computers to program themselves from an initial structure and subsequent experience,

how to incorporate additional computational architectures and algorithms that can better capture, store, index, retrieve, and merge these data, and how to orchestrate learning sub-tasks into a larger system. Yet even this is a simplistic definition. Merely thinking about machine learning in such stark outlines misses the question of how we can build systems that deliver positive results while avoiding unintended negative consequences.

Perhaps we ought to think about machine learning as a new discipline, an engineering one, that incorporates fields as varied as economics, the humanities, and the social sciences. Maybe then, with this new construct in mind, we can develop machine learning systems that are attentive to social welfare, create new economic markets, and solve large-scale problems that enhance human life.

Leslie Daigle: Michael Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his master’s in mathematics from Arizona State University and earned his PhD in cognitive science in 1985 from the University of California, San Diego.

He was a professor at MIT from 1988 to 1998. Professor Jordan is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences. He is a fellow of the American Association for the Advancement of Science. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics.

Professor Jordan was a plenary lecturer at the International Congress of Mathematicians in 2018. He received the Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM/AAAI Allen Newell Award in 2009.

He is a fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA, and SIAM. Welcome.

Michael I. Jordan: Thank you.

Leslie Daigle: So let’s start with some definitions. First, in addition to what Alexa said in the intro, what is machine learning? What is AI? Why are they confused with one another so often?

Michael I. Jordan: I’ll be happy to try to give you some definitions, but I must say I don’t really like definitions and I don’t think we tend to operate with definitions in mind.

I think there are intellectual trends that reflect our era. And I like to think of our era as lasting, like, a century. That’s kind of the intellectual era we’re living in. And the last century has been full of new developments: you know, the information sciences broadly speaking, computer science, the growth of statistics. Economic science arose in this era.

And this had huge implications for human beings. We’re now living with these kinds of ideas in our midst, and, you know, control systems of all sorts, and also the growth of a whole branch of information technology. Perhaps the most striking change in the last, say, 10 or 20 years is

the magnitude of it: the growth and scale of data, the availability of data in all areas of human inquiry and discourse, the sciences, technology, and the granularity of it. There’s now data about, you know, each individual human, data about each individual gene in the genome, about each region of the sky.

And that’s qualitatively different from previous eras. Even though some of the ideas and thinking were the same, the scale of it, the specificity of it, the scope of the inquiry has now shifted. I think of our era as that, and the implications for systems. Indeed, I really liked the introduction.

It’s less about a specific computer or how an entity that’s intelligent behaves, and more about the overall system that we’re all part of, which may be planetary in scale: the system that underlies finance, the system that underlies worldwide commerce, transportation.

These systems have computers and data and flows and people, you know, involved in helping with decision-making. And I think that’s the right scale and scope to think about in our era. The trends behind that are definitely not just AI, the notion that you’d put intelligence and thought into a computer.

It’s operations research, it’s control theory, it’s economics, it’s certainly statistics, and computing kind of as an infrastructure lying behind all of that. And it’s also the human experience of living with all of this, you know, being in the midst of computing, developing an understanding of that, developing legal structures to think about it.

And so all that, that big phenomenon, is certainly not subsumed within the field of AI. AI I still think of as an intellectual aspiration; it’s almost a philosophical one. It’s thinking about what it would be like if a computer could really think like us, you know, reason, plan, and so on. And I still think it’s an interesting,

absolutely fascinating philosophical aspiration. It just hasn’t happened yet. It’s not even clear what the glimmer is that’s going to start to make it happen. But in the meantime, computers can do things that are, you know, striking and in some sense intelligent. I mean, a computer can compute the digits of pi.

I can’t do that. So is the computer more intelligent than me? Well, for some narrow kinds of tasks, for sure. And that’s what we want. It can calculate things that we could never calculate. But that doesn’t somehow mean that we’ve been surpassed in intelligence. It just means we have kind of a new tool in the universe.

And so machine learning itself is a terminology that I’m a little more fond of. As the introduction alluded to, it’s kind of a merger of statistical or inferential thinking with computing. But the seeds of that have been present for a couple hundred years. Gauss, the original statistician, if you will, or one of the original ones,

who was doing astronomy with the kind of methods we use today, doing it with the computing of the day, would recognize what we’re doing now. You know, so it’s conceptually not all that different, but,

Leslie Daigle: But yet I get the feeling that part of what you’re saying is, as a friend of mine used to like to say, a difference in number can mean a difference in kind. I mean, while Johannes Kepler would recognize that this is the study of where the stars are in the sky,

what’s available by way of data, and therefore the inferences that can be drawn, are really a whole other world.

Michael I. Jordan: Something qualitatively has changed. It’s not clear, though, that conceptually it has: really, we’re using gradient descent methods and we’re using large amounts of data to find patterns. And again, our forebears, you know, the Gausses of the world, would recognize all of that.

But I think the thing I tried to get at is the specificity, that it’s granular data. It used to be that, you know, if you’re a physicist, you’d study data, you know, the motion of objects, and there were laws, F = ma, that would characterize all objects. That was the goal of science.

And now you have motions of particular kinds of objects in particular kinds of situations, and you want to have specific laws. And it’s even more true about human beings. You know, the idea that you would have a single law for all humans seems a little outdated, almost. But there was no data to do anything different.

Now there’s data about humans in all kinds of situations, including commercial ones and social ones of all kinds. And you can start to actually do a little science at that level, of something more specific. So I’d argue that’s actually what’s changed: it’s qualitatively different in that it’s now specific and granular, and that’s the kind of inference we’re doing.

And if you’re a business, of course, you’re trying to do personalization. That’s one of the core ideas. And that reflects the fact that I’m not going to build one business for everybody; I’m going to do different things for different people. I make more money doing that. So that’s also, if you will, qualitatively different. But I definitely do not think AI or machine learning has come up with great ideas that were qualitatively, you know, breakthroughs.

I don’t think that’s happened and changed everything. It’s rather that the accumulation of scale and scope, as you’re talking about, together with some pretty good ideas, has unleashed some things. And I do believe in a hundred years we’ll look back and there will have been some qualitative conceptual breakthroughs.

I just don’t know what they are.
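
The gradient descent and least squares that Jordan mentions fit in a few lines. Below is a minimal Python sketch of that continuity: fitting a line by minimizing squared error, the same objective Gauss used in his orbit calculations, via the gradient descent loop that modern machine learning runs at vastly larger scale. The data, step size, and iteration count are invented for illustration.

```python
import numpy as np

# Toy data: a noisy linear relationship (invented for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.5 * x + 0.7 + rng.normal(scale=0.1, size=200)

# Least-squares objective, the one Gauss used: mean of (y - (w*x + b))^2.
w, b = 0.0, 0.0
lr = 0.1  # step size, chosen by hand for this toy problem

for _ in range(500):
    err = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted w={w:.2f}, b={b:.2f}")  # should approach 2.5 and 0.7
```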

Alexa Raad: Speaking of scale, you yourself have pointed out that advances in machine learning have powered innovative products and services from companies like Google, Netflix, Amazon, you know, and others. You could argue that machine learning is fundamental to their business model.

As you just said, you know, there’s so much data, and so much of it is just so granular. But have those advances now given an unfair advantage to these big companies, the big tech, because they consume so much data?

Michael I. Jordan: Yeah, that’s a great question. My answer is going to be no, in a maybe slightly surprising way.

First of all, I really want to distinguish between an Amazon business model, or maybe Alibaba in China, to make this a little more international, and the Google and Facebook business model. Their business model fundamentally is about advertising. That’s how they make their money. Right? And that goes back to, you know, when television arose, it needed to be free.

You didn’t want to have to pay for it, so you had to think of a way to monetize it, and you did that with advertising. Google and Facebook, likewise: they’re free services and you’ve gotta make money somehow. And so, surprisingly, you can kind of corner the market on advertising.

So in that sense it’s unfair; if you’re an advertiser out there being dominated, it’s unfair to you, but I guess I don’t care so much about that. You know, Amazon, and Alibaba in China, they bring packages to people’s doors. Right? And that’s a whole different kind of business model, a whole different kind of service.

And what about whether it’s more healthy for human beings? You know, you could argue, I mean, bringing information via search boxes is useful to people. But when you’re really bringing packages to people’s doors, you’re doing some real economics. You’ve got something that people are willing to pay for a little bit.

Right? And you’re now in the world of providing a real business model and all the things that come with that: you create markets, you create links between producers and consumers, data flows on the basis of that. You create computing infrastructure to support all of those data flows.

And I think of that as a much more natural kind of business model. And to scale that, and to make it viable and successful and safe and useful, you’ve got to do all kinds of machine learning under the hood. In particular, you’ve got to model fraud. You know, people are putting their credit cards into an e-commerce site.

You’ve got to make sure that you know what’s fraud and what’s not. You’ve gotta model supply chains, because if you’re serving a billion products to hundreds of millions of people, which is in fact what these companies are doing, you’ve got to know where all the products are in the supply chain at all moments.

And that’s now unbelievably scaled, you know, relative to what companies used to do. So those companies had to gather all that data and make all those models and build all that infrastructure to be able to do e-commerce at that scale. Now, do they have an unfair advantage in e-commerce?

Well, no, they have an advantage, but it’s fair. They built it, they did it, and they were bringing value to people. Does that also give them an unfair advantage for things like, you know, natural language processing, or things that academics might want to work on in other domains, because they just have all this data?

I don’t believe so. If I want to build a system that does, you know, medical predictions, or does something, some other business model to do with, you know, social interactions among people or whatever, I’ve got to collect that data myself. And, you know, maybe the company has some head start, but rarely is their data granular enough for the phenomenon I care about in my little business, in my little piece of it.

So I think it’s actually quite surprising: the companies don’t have data for a lot of the new ideas or science ideas or business models. You know, there’s plenty of room and niches for smaller companies to emerge, but also for academics to do all kinds of research.

And I don’t think that has changed. In fact, what Google started to do at some point, because they have so much language data, is they provide things like, you know, a natural language service, if you will, that does translation, and they make it free and advertise against it. So it’s not even yet a business model.

But now that becomes a commodity. If I’m doing natural language processing, I can build on top of that and then do more. And so it’s no longer a natural advantage for them in terms of any kind of business.

Leslie Daigle: It strikes me that a lot of the tension in the space around AI, and I use that term intentionally,

and these questions, is the sense that somehow there’s somebody getting ahead of where “we” are, for some value of “we.” So whether it’s an unfair advantage because, you know, there’s an irreducible, large, granular set of data, or whether it’s, you know, “I don’t understand how this analysis is done.”

You know, even whether it’s computing the digits of pi, which we do know how to do, there’s the sense of: if I can’t touch it, smell it, feel it, or build it myself, is that somehow a problem? I think that’s where we’re seeing some of the tension around fear of AI. It has been expressed by a number of

leading figures in industry, and maybe it goes a little bit to the heart of: is it an unfair advantage?

Michael I. Jordan: There’s too much packed in there; I’m going to disagree with about half of that. You know, the fear of AI expressed by, I think you mean, like, Elon Musk or whatever, and no one in AI believes that stuff, just to be clear. And Elon, you know, is the genius of our time and so on, but doesn’t know much about this,

and thinks that computers think, and that we should interface to the brain, and has all these crazy ideas about it. Fun, crazy stuff, but it’s been science fiction for hundreds of years. So let’s discount that fear of AI. What do we have fear of? You know, individual humans have fear of all kinds of things, including vaccines, including, I think, all kinds of things out of our control.

Right. And so, again, we could talk about that, but I don’t think that’s really what the fear is. Fear of monopoly? You know, that’s not a fear; that’s part of economic systems. We have to have regulation by government, we have to have discourse about that, but that has nothing to do with Elon Musk’s fear of AI. You know, do some companies have advantages because of discoveries in AI that they kind of hold and can then exploit?

And I’m going to argue very much no, because AI, I’m using the term AI again just to help out with the discussion, but the development of the algorithms and the infrastructure and all that, it’s all done completely openly. The work is mostly done still in academia.

And some of it’s done in some of the big labs. It’s all on the arXiv within a day. It’s all out there; there are no papers held back. Right? And it’s all pretty simple stuff. This is not super advanced mathematics; it’s pretty easy stuff. And you go all around the world, which I do, you know, and the 25-year-olds there have read all those papers.

And they know it just as well as any researcher inside of Google, I can guarantee you. So what else does Google have? Well, they have large numbers of computers, so they can do some kind of show-off things like AlphaGo or something, which others maybe can’t do as easily. But I can get tons of computers just by paying Amazon a little bit and using the cloud.

Right? And that’s true in many countries. So currently, I mean, this may change, but there are no conceptual advantages held by the companies. There might be a market-position advantage, but that’s classical, you know; Standard Oil had that.

And then it had to be dealt with, as we think about in economics.

Leslie Daigle: Yeah, I think my point was not that these were necessarily valid positions, and what you’ve described is really how they aren’t, how there isn’t an intelligence that’s scary and worthy of fear, but that this is where people’s fears come from.

So for people who are not in AI, who are not in machine learning, who don’t understand pattern recognition, whether from today or a hundred years ago, it does potentially set up a situation where you don’t recognize it, and therefore it’s fearful, it’s something worthy of fear.

Michael I. Jordan: So the fear is real.

And I have it too. And like I alluded to, I think it’s real and meaningful. And certainly I would not claim that people should not have fear of the future and fear of technology. Moreover, another element in what you were getting at that I do agree with is transparency issues. If I go to the bank to get a loan, and the bank says no, and they just point to an algorithm and say the algorithm decided, that’s not, I don’t think that’s about fear.

That’s about the legal system. That’s about our recourse as human beings, and about what we expect, naturally, and should have. And if the algorithm is not doing that well, the algorithm is messed up, and that’s an engineering and technical problem to work on. And I personally believe there’s a bit too much optimism by a lot of AI colleagues saying AI just solves all these problems.

It can do better predictions than anybody, therefore let’s just use it, it’ll only help humanity. I think that’s just dead wrong, because humanity needs transparency. People need to understand the decisions; they need to have it all embedded in the legal system and embedded in a social contract and compact.

I think it’s going to take decades to make machine learning and algorithms merge well with our social compact. I think it is the development of a whole new branch of engineering to do that. And I don’t like the naiveté that somehow AI just solves the problems magically.

It is a complicated, useful tool, and you’ve got to include it as a tool and think about it very, very deeply. And I don’t think there’s maybe been a branch of engineering that ever emerged that was quite as complex as this. Chemical engineering is one I tend to refer to: you know, the idea of doing chemistry at scale was complicated, and it was going to have a huge impact, and people were excited.

Right. But it didn’t have humans quite as much in the mix as this. So this is complicated.

Alexa Raad: So speaking of branches of engineering: in reading your bio and doing some research about you, it struck me that you are the Michael Jordan of ML, you know, pun intended. You are the foremost authority in the world on ML. In your writings, you’ve said that we have a major challenge on our hands: bringing together computers and humans in a way that enhances human life.

And you’ve argued that this really is a new branch of engineering that builds on various other branches and disciplines, like statistics, the social sciences, economics, the humanities, et cetera, and that it should be taught as a new engineering discipline in schools. Can you talk a little bit about why? You know, statistics and computing and data science all make sense.

Why are disciplines like economics and the humanities in the mix and under consideration, and what kinds of conceptual flaws would we be subject to if we ignored the importance of, say, economics or psychology?

Michael I. Jordan: Okay. Yeah. Thanks. That’s a fantastic question. That’s the one I hope will kind of continue to be asked for the coming decades because that really gets at the heart of this.

You know, first of all, the word engineering has kind of been deprecated, or it’s accrued a certain bad smell. If you say social science, that sounds great; if you say social engineering, that sounds terrible. You say genome science, that sounds wonderful; genome engineering, not so much.

Even mathematics: mathematics stands all by itself, but people talk about the mathematical sciences, and that sounds great. Bioengineering doesn’t sound as good. I could keep on going. Okay, maybe there’s something to that. But think about how much engineering has done for human beings, arguably more than anything in the sciences. Civil engineering gave us our bridges and our buildings, and chemical engineering, you know, gave us all the materials

we have, including all the drugs and medicines that we have, and electrical engineering, obviously. In terms of human happiness, engineering has arguably contributed more than anything else. Right? So how come the term has gotten this reputation? I think it’s partly just because there are these externalities, these other things that happen that people didn’t think about.

And so that’s why I’m very eager. I think we are developing engineering here. It’s real-world systems that you deploy, with computing, and they do things in the real world, like, you know, healthcare and transportation and financial kinds of things. And they are supposed to bring value to human beings.

They’re supposed to work, and they’re supposed to be robust, and all that. There’s scientific understanding behind it, just like there was for chemical engineering and electrical engineering, but it’s really about the system; it’s really about working and doing good things for people. And so we’ve got to think through all the implications: is it really going to work, is it really going to have good effects?

Those are an engineering style of thinking. And so I want to bring in younger students and teach them that they can think through these things themselves. Some of them want to just do pure science and write down, you know, laws, but some of them want to change things now.

They want to build better structures, you know, for healthcare, for social justice, for whatever; they have their own problems, and they’ve got to think them through in an engineering kind of way. Right? And when you have humans in the mix, that engineering way of thinking has to have economics and the humanities in it, because it’s not just about data, it’s about values.

It’s about what you want, what humans want, what groups of humans want. How do we make sure that people are protected?

Alexa Raad: That’s really critical. I mean, this is the argument that’s really come into discussion lately from a number of different angles. And I was very interested when I read that this is what you are advocating.

And this is sort of the why you’re asking: not just the what, you know, what can we make and how can we make it, but the why, to what end, and what are the values that underlie it.

Michael I. Jordan: That’s right. And I don’t have the values that I want to impose on the systems, nor do I think Zuckerberg does, or, you know, Biden.

I think it has to emerge from down below. People should be able to express their values and have them be recognized and then brought in and respected as part of the systems. And, you know, to a large extent, that’s what economists try to do. Right? And of course their field is still in development.

And I think it’s going to change greatly in the next hundred years. It’s going to have to be a whole new kind of economics that has data, you know, these more granular-level interactions and so on. I think a lot of them recognize that. So I got interested in this partly just by thinking about congestion. You know, if machine learning systems are doing decision-making, like trying to help you figure out the fastest path to the airport, it’s a dumb, obvious thing that a machine learning system can help you do.

It’s got lots of data; you know, it finds that. And if it’s just working for me, then it’s going to do a great job. But if it’s working for everybody, it may send everybody down the same road, because it looks like the fastest route. Now we’ve created congestion, and it’s no longer the fastest road. All right?

So what do you do at that point? Well, if you’re kind of a classical IT person, Facebook style, I think you would try to say: well, we know a lot about these humans, we know their browsing history, we’ve collected lots of data about them, we know who wants to go on what road. Now, this is an exaggeration, but that’s kind of the spirit.

That’s how we personalize: we know that you, Alexa, want to go down that road, because we just know you pretty well, and we know that you, Leslie, want to go down that road, and we’ve figured out how to balance the traffic and make sure it all flows. Right? And I hope you kind of see the ridiculousness of that.

The smarter thing to do would be to create an economy, a market, where you, Alexa, are told at that moment in some way that there’s possible congestion, and that if you want to save your money, you could go on a slightly slower route and, you know, save the money for some other day. And you, Leslie, may be in a big hurry today, and you may want to spend your extra money.

When I say money, I mean, you know, some currency. And when I say decide, I mean there might be a little, you know, agent working on your behalf to try to help with this whole thing.
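
Jordan’s congestion example can be made concrete with a toy simulation: two routes whose travel times grow with load, a naive recommender that sends everyone down the apparently fastest road, and a simple market in which drivers with different values of time weigh a congestion toll. This is only a sketch of the idea, not any deployed system; the routes, numbers, and toll are all invented.

```python
import random

def minutes(route, cars):
    # Free-flow time plus a congestion delay that grows with the load.
    free_flow = {"highway": 10, "back_road": 20}
    return free_flow[route] + 0.5 * cars

def naive(n=40):
    # Every app compares empty-road times, so everyone picks the highway.
    return {"highway": n, "back_road": 0}

def market(values_of_time, toll=4.0):
    # Each driver weighs the toll against time, given the current loads.
    loads = {"highway": 0, "back_road": 0}
    for v in values_of_time:  # v = currency units per minute saved
        cost_hw = v * minutes("highway", loads["highway"]) + toll
        cost_br = v * minutes("back_road", loads["back_road"])
        loads["highway" if cost_hw < cost_br else "back_road"] += 1
    return loads

random.seed(1)
# Some drivers are in a big hurry (high value of time), some are not.
vots = sorted((random.uniform(0.1, 2.0) for _ in range(40)), reverse=True)

# The market splits the load, so the highway stays much faster than in
# the naive case, where all 40 cars pile onto it.
for name, loads in [("naive", naive()), ("market", market(vots))]:
    print(name, loads, "-> highway takes",
          minutes("highway", loads["highway"]), "min")
```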

Alexa Raad: Really, this is a real issue for self-driving cars.

Michael I. Jordan: Yeah, well, it is certainly about self-driving cars, but it’s also just about cars.

It’s about moving bodies from one place to another. And again, it’s about the values you have. This is a simple example of value: how much of a hurry are you in? Right?

Alexa Raad: Yeah, but that’s an economic value.

Michael I. Jordan: It’s an economic value. It matters to me if my children or I have got to go to the hospital.

It matters to me a lot in that moment, perhaps. And I don’t even know that before the moment occurs, so it couldn’t have been learned from the data, right? It can’t be just a learning system. It’s got to be thought through in the moment that I’m plopped into a situation. And that’s what economics is about: the market, in the moment things occur.

And then you react, and the intelligence of the market somehow, ideally, makes something better. That style of thinking has just not been present in machine learning, or certainly AI. And that to me is a huge defect of the whole field.

Leslie Daigle: And is that related to the notion of needing to be able to bring different machine learning systems together, to interact with each other?

Because, I mean, self-driving cars are an example where there are multiple different specialized systems analyzing a lot of data at the same time. How do you bring that together in a coordinated fashion?

Michael I. Jordan: Yeah, it’s partly related to that. I mean, that’s just kind of classical engineering: systems have components, and you’ve got to bring them together

so it all kind of works well. But, you know, economics is also about scarcity and conflict. Right? There’s not enough road space to go down, and there are our preferences. And it’s also about building connections between producers and consumers. And so the other kind of thing that helped me think this through a bit, other than just things like traffic, is to think about domains like music or the arts, or even journalism.

There’s a producer and there’s a consumer, right? And you really want to set up a link between them, so that the producers are producing for a certain set of consumers and the consumers know how to find that producer. And because something valuable is passing between them, say a song is being listened to, and it’s a song that someone loves, that value should be reflected in some sense economically.

So I like thinking about a system that doesn’t just stream music to people and then advertise to make money for the people doing the streaming, but rather a system that reveals to the artist: here are the people that are listening to me, and lets me reach out and say, hey, I’ll come play your wedding.

Or, hey, do you like this song? And it does that at scale and creates a whole market of producer-consumer relationships, using all of our technology, our machine learning and our data and all that. And the platform disappears at that point. It doesn’t have to do advertising; it just makes connections. And some money starts to flow, because there’s real value.

Just like I was saying about Amazon earlier: you know, it sends packages to people, and they say, I’ll pay for that. There’s real money there. And then, you know, the musician gets most of that money, perhaps, and the platform takes 5%, instead of taking all of the money, which is what’s currently happening.

Then you start to say, okay, how can I build a system like that? Is it enough just to analyze the data and put it out there, you know, on the cloud? No, I’ve got to have economic principles; otherwise this is going to be gamed, it’s not going to work, and so on. And so I’ve got to go to my economics books or my colleagues and say, how do we build a system that has got, you know, principles, and where they’re all learning from each other?

And as Leslie alluded to, there’s now got to be cooperation. Another example is recommendations for restaurants, right? I can’t experience all restaurants and decide which ones I prefer. But you experience some, I experience some, and we start to recommend to each other.

And then the restaurants are also trying to connect up, and we build platforms that actually allow that kind of information to be shared. That’s economics meets machine learning.
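
The restaurant example is, at its core, collaborative filtering: borrow recommendations from the people whose past experiences most agree with yours. Here is a minimal sketch of that idea; the diners, restaurants, and ratings are all invented for illustration.

```python
# Minimal user-based collaborative filtering for the restaurant example.

ratings = {
    "alexa":  {"noodle_bar": 5, "taqueria": 4, "bistro": 1},
    "leslie": {"noodle_bar": 4, "taqueria": 5, "cafe": 5, "diner": 2},
    "sam":    {"bistro": 5, "diner": 4, "taqueria": 1, "cafe": 1},
}

def similarity(a, b):
    # Agreement between two diners, averaged over co-rated restaurants.
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    return sum(ratings[a][r] * ratings[b][r] for r in shared) / len(shared)

def recommend(user):
    # Score untried restaurants, weighting each diner by their agreement
    # with this user, and return the best candidate.
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        w = similarity(user, other)
        for rest, stars in their.items():
            if rest not in ratings[user]:
                scores[rest] = scores.get(rest, 0.0) + w * stars
    return max(scores, key=scores.get) if scores else None

# "cafe": leslie, whose tastes closely match alexa's, rates it highly.
print(recommend("alexa"))
```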

Leslie Daigle: That’s really excellent. And while we’ve covered a lot of ground, I feel like there’s so much more we could delve into, but sadly we’re just about out of time.

So I’ll wrap up by asking you: what are three key things that you think we, the general populace, need to keep in mind as we face this brave new world with more machine learning?

Michael I. Jordan: Ooh. Okay. That’s a great, hard question. I think that we’re living in a really tough time, and we’re all kind of trying to ignore the fact that there’s so much misinformation and so much distrust. It’s really the worst of times in some ways; the pandemic sort of amplified it, but in some ways almost not.

It made us retreat into our homes for a while. But that fact is serious, and I think computer science has had a role to play in it. So I think the general public has got to push back on that, you know, ask technologists to do better. It’s got to be willing to insist that it not be this way, not just wait for it to happen.

You know, education is still critical. I think that, you know, we nearly had fascism in the United States in the last two years, just to bring up something pointed, and I think there are two ways that we escaped fascism, and it’s not a done deal yet. Number one was that the rule of law did prevail.

The judges made good decisions. And those judges often were Republican, you know, judges, but they still had the rule of law in their brains. And why? Well, they were educated in the American system, including the universities, whatever their parties. That was one part of it. The other part of it is that we are a diverse country.

And I think that, you know, the Black people in the South, just to really be clear about it, really rose up and said, you know, we’re participating, we have a voice, and we’re going to make these things happen. And I think that helped us escape fascism: they were smart enough to see fascism and realize they didn’t want it.

Right. And so culture and rule of law are things the United States has done pretty well, and, you know, those were endangered. So I think that people have to focus on those strengths and ask, you know, technology not just to bring us miracles and, you know, vast wealth and all that,

but rather to make sure it merges well with culture and merges well with rule of law, because that’s the way we’re going to continue to build a decent society going forward. So, that may not have given you three things, but hopefully it was helpful.
