Envisioning AI for Good

 

 

Longitude Sound Bytes
Ep 79: Envisioning AI for Good – with Brett Phaneuf

 

Tony Zhou
At the intersection of ideas and action, this is Longitude Sound Bytes, where we bring innovative insights from around the world directly to you.

Hello listeners! I’m Tony Zhou, a Longitude Fellow from Yale University. Today’s episode features highlights from a conversation I shared with Brett Phaneuf.

A serial entrepreneur who wears multiple hats, Brett is the Co-Director of the Mayflower Autonomous Ship 400 project, and the founder and chief executive of the Submergence Group LLC (in the US) and M Subs Ltd (in the UK). In addition, he is one of three founding board members of ProMare, a non-profit 501(c)(3) organization that advocates for marine exploration around the globe.

In our first episode of the Imagination series, Brett and I discuss the inspiration for the Mayflower 400, his career path, and the ethical dilemma in data and AI governance. We start our conversation with Brett sharing the inception of the Mayflower 400.

.

Brett Phaneuf
So Mayflower 400, what we call the Mayflower Autonomous Ship or Mayflower 400, depending on who you are, or the MAS 400. My role in that is that I was sort of the chief instigator of nonsense and stupidity. I had the idea. I was in a meeting with folks here in Plymouth, UK, the city government, in 2016. And one of the things we were chit-chatting about in the margins was the coming 400th anniversary in 2020, how the city wanted to do something big, and then there was a proposal kicking around about building a replica. And I wasn’t particularly enthused, because first of all, there is a replica of the original Mayflower, the Mayflower II, and that’s in Plymouth, Massachusetts. It was built in the UK, sailed across in like ‘57, and was given to the people of the United States, so I said look, that’s been done. And also, you know, what do you get out of that? You get a 17th-century ship. We shouldn’t be thinking about that. What we really should focus on is what the next 400 years of the maritime enterprise look like, right? And why is it important, and how do we speak to it? And maybe put ourselves in the mindset of, you know, it’s impossible to do but you could at least try: four hundred years from now, when people look back at this moment, what is the thing we would do now that those people in the future would find inspirational? For me, that’s an autonomous vessel.

And it’s sort of an outgrowth of other work I do in the defense sector, building manned underwater vehicles and unmanned underwater vehicles for 25 years, and oceanographic and climatological research in grad school. It’s that, and also I grew up not far from Plymouth, Massachusetts, though now I live in Plymouth, England. I’ve got technological and oceanographic and historical facets to my personality, and I was already deeply interested in autonomy and AI. Even more from a philosophical perspective: how do we know what we know? Why are we alive? Not epistemology, but more ontology. It’s sort of the roots of consciousness, how organic consciousness differs so profoundly from machine-based, analytical systems, and whether that kind of general AI is possible. And those are kind of interesting philosophical arguments. In my business life, I’m more of an applied research guy. I had already gone down the path of practical applications: let’s build things that live out in the world that are autonomous and learn from them in a very practical way through meaningful interaction over time. So we decided to build the Mayflower Autonomous Ship in 2016. And we really focused the first four or five years of our work on building infrastructure to collect data.

So we said, you know, the AI is the hard part. We’ve got to start thinking about that. We have to have lots of data that we can turn into models. So we started setting up infrastructure all around Plymouth Sound, at sea, on offshore structures, on small boats and vessels, collecting copious amounts of data, tagging data, and then being able to auto-label data, building engines to auto-label data based on our initial model structures. And we did that over… Well, it’s ongoing now; we are in year six of it.

At the same time we started talking about this vision of an autonomous ship, and how it would be useful, and how it’s sort of an end-to-end, all-encompassing, multifaceted research program. So there’s the AI side of it, the machine learning side of it, the edge computing side of it. There’s the space side of it, where we have the space tech and communications and distribution of data through space-based assets, and tracking of the vehicle through space-based assets, and cooperative research with space-based assets looking at ocean health, meteorology, climatology. And then there’s the sort of oceanographic and meteorological component of the vessel. What’s in the water? What’s under the water? What’s the temperature? What’s the conductivity? What’s the fluorometry, the chlorophyll, the planktonic content? Are there microplastics? What’s the chemical composition? Where are there cetaceans and pinnipeds and marine mammals? All these kinds of things that you want to find out, and then some meteorological data with two weather stations on it.

It took many years of talking to people and giving presentations about it, and working through my nonprofit, which was set up about 20 years ago, to get some base funding. And we did some crowdfunding. And then IBM saw it and thought, hey, that’s super cool. So we stand on their technology platform, but they’ve helped us build our tools and systems right away, so we could practically deploy it.

And now, just in the past three months, after our initial attempt to cross failed and we brought it back, we’ve done refits. We took the opportunity to change out the edge compute devices, and we’ve quadrupled the compute power on the edge in three months. Right. That kind of stuff is mind-boggling.

Tony
Right. Right, right.

Brett
And so now we have lots of capability. And it’s all about how you apply that effectively to do a task. So it’s really kind of multifaceted. It’s interesting, then there are societal and philosophical questions about, what are people doing? Do we want robotic systems? And how do people interact with them? How do they interact with manned vessels? Do we need more of these? Do we need more manned vessels? Is the AI that we’re building to navigate something we want to put on manned vessels to help people be safer? What does port infrastructure look like? What does international regulation look like for this kind of thing? Because it’s new.

Tony
All these questions that you’re still working through.

Brett
Yeah, we’ve got to figure it out. And so my answer was, if you want to push on all the soft spots, build something, right? Build a thumbtack that people step on and they’ll do something about it. And so we built a really lovely thumbtack. And it’s forcing people to engage. You know, the US Coast Guard has been unbelievably forward-leaning, meaning helpful. They said, well, we don’t have any regulations on this, so we’re going to take this opportunity to help figure out what they should be. The UK Coast Guard took the opposite tack: there are no regulations on this, so you can’t do it, and we’re gonna punish you if you try. But they are learning. They are coming along.

Tony
There’s a lot of talk now about ethics and regulations for innovation with AI, and this is one of the prime examples. It’s really fascinating to just see it.

Brett
But what is ethical AI, though? You know, I think about this a lot. I mean, it depends on what your perspective on AI is. I look at it this way: it’s not artificial intelligence, because it’s not truly intelligent. It’s augmented intelligence. And so instead of thinking about it in terms of how it displaces people, you should think about it in terms of how it helps us be better people, right? It helps us be better people. And there are myriad ways where this is true. You would not want traffic in a major city managed by a person standing on a podium in the middle of an intersection, waving their arms or manually changing colored placards. Traffic management is a great example of incredibly sophisticated, ubiquitous automation and engineering that we live with all the time. You don’t even think about it, it doesn’t bother you, you’re not thinking about any of that when you’re driving around the city. Look at the evolution of smart sensors in cars that do emergency braking, and lane keeping, and driver warning, and driver assistance. That’s a great example of an emerging technology that will become more ubiquitous and sophisticated, and that helps us be better at all these things we want to do every day.

And we should use machine learning and AI systems to help us take those almost impenetrable masses of data that are, temporarily, beyond human comprehension, and reduce them to actionable information that can be integrated into the total corpus of knowledge about how our planet works. Using these technologies helps us liberate a part of our intellect, which is so much more important than the ability to hold volumes of data in your mind. And that’s insight, which is something computers don’t do. This is augmented intelligence.

So what are the jobs that people should be training for?

People are graduating from university, and I deal with this every day, trying to hire people coming out of university, who their entire lives were taught how to plug things into a program that will output an answer, but they don’t understand why the thing is as it is. All those formulas have been derived before, and so they get an answer. You know, one of my 60-year-old engineers worked in the machine shop and worked in a foundry, and will look at the answer that comes up and go, wait a minute, that can’t be right, because I know that type of steel shouldn’t yield there, for example.

But we get a lot of people who come out with engineering degrees, advanced degrees, and they’re like, well, the answer is this. And it’s like, yes, but it’s clearly wrong. You have to go back and look at it… because there’s no real understanding of even basic first principles.

Tony
This is super interesting.

Brett
It’s a real problem in society. And that’s actually the ethical dilemma with AI. That is the actual ethical dilemma that no one wants to talk about. It obviates the need for people to actually know things.

Tony
Yeah. I think that’s the issue right now because, you know, in curriculums there are a lot of, let’s say, online programs, boot camps, or even national institutions popping up with these degrees. You end up with people trained as data scientists or machine learning engineers, and they learn how to plug and chug models, but they don’t understand the inner workings of the model well enough to question and understand it.

Brett
Well, I’ll give you a great example. We built a submarine for a client, a military client. A very large defense corporation that considers itself beyond reproach was involved in analyzing the hypothetical performance of the vehicle. They had done a computational fluid dynamics and hydrodynamics analysis based on a model of the vehicle they created from our drawings, and told us that the turning radius was almost a mile, like 1.6 kilometers, at flank speed. And they were presenting this to the government and to us, and how this was problematic, and, you know, obviously we were going to have to go back and do a redesign. And I said, but it’s wrong. They said, no, we’ve run this several times now and this is the best we could come up with. And I said, yeah, but I drive the submarine. The turning radius is like 100 meters. I think you have an extra zero; I think you’re off by an order of magnitude. And they said, no, that can’t be right. The model says this. I said, yeah, but are we really going to debate whether your model is more right than reality? Because reality wins every time. And what you should be saying is, that’s weird, we need to go back and figure out why there’s a flaw in the assumption that underpins this model. Because clearly, reality is right. They couldn’t say they were wrong, because they probably spent half a million dollars making this model. And so this is a problem, not just in the education system, but all the way through industry now. I think that the real ethical dilemma in AI is not, is it going to displace people from work?

I think what displaces people from work is the fact that they don’t bother to educate themselves. And generally, we don’t bother to properly educate them in the things that truly matter. And we are wildly distracted as human beings. So the ethical dilemma will be: is AI exacerbating that problem because it does too much for us? Or does AI actually help us be more insightful and creative by eliminating certain elements of our existence that we don’t need to devote as much thought to? And the answer is probably both.

And I would say that that is no different than the emergence of any major technology over the past couple of centuries.

Tony
One of the questions that I had immediately as you were speaking is that your career, the way that you’ve spoken about it, the trajectory, didn’t involve AI at first, right? It was just things that you were thinking about, things that you were doing that you were also interested in. Yet at the same time, in speaking with you, you speak with such depth, and sort of philosophically, about AI. Would you mind sharing how you brought all your interests and curiosities and passions together into one focus?

Brett
I wouldn’t say I’m focused; I’d say I’m my own sort of thing. So the really bizarre thing about all this is, I’m actually an anthropologist by training. I started off in physics. I ended up leaving university, going into the military, came out, studied classics and classical history, archaeology and anthropology. And that’s what my degree is in. I went back for a master’s in nautical and marine archaeology, and then moved to geophysics in oceanography and worked towards a PhD there, but never finished it, and ended up starting a research company, and kind of went off on various bizarre trajectories in the nonprofit world, and then into defense contracting, and offshore oil and gas, and subsea research, and all sorts of things that afforded me the opportunity to do things that are profoundly expensive, that I could never afford to do on my own, but to try to learn new things about my environment. And along the way, having a background in anthropology was really, really useful in dealing with the culture of the military and business, and oil and gas, and different kinds of places and things and people and little subcultures that have their own dynamic. I’m an anthropologist, so that’s probably why I talk about things in a philosophical way, because I’m interested in people. I love technology, I’m really interested in that. So more and more of my interest in AI is around these really strange, almost slightly off, sort of arguments about whether or not something is ethical, and they all feel like they miss the point. Like they’re striking a target slightly off center of the actual ethical argument, right?

Tony
Like you connect the dots, but your conclusion ends up…

Brett
Yeah, you’re kind of hitting the second ring out, right? You’re not in the bullseye. And so I think I’m in the bullseye. Maybe that’s narcissistic, but I sort of see it that way. But it’s… I read a lot. And I’ve had the pleasure and the privilege in my life to be able to read, as part of my education and my job, a really broad range of history and philosophy and archaeology and technology and politics and fiction, you know what I mean? I still do, and lately, all that reading, along with the work I do to put food on the table, and the Mayflower project, has dredged up this interest in all the little disparate parts of my life. So now I say, oh, well, I can see how I got here, because I’m looking back.

.

Tony
The Longitudes of Imagination series serves as an opportunity for its listeners to learn how professionals in different sectors approach imagination, and how ideas turn into action for the good of humanity.

As the Mayflower 400 project continues to grow, Brett and the teams at ProMare and IBM will need to continually address both the technical and the social concerns of artificial intelligence and its use cases. From building efficient data pipelines to an autonomous ship that can survive whatever it may encounter at sea, Brett and the Mayflower 400 team will not only be involved in improving the ship’s capabilities but will also have active voices in shaping new regulations and policies in autonomous AI and ocean research.

When Brett credited his imagination and his thoughts on AI to his non-traditional and diverse education, it really resonated with me, as I often find many parallels between learning data science and classical music. It is inspiring to see how Brett, who is an anthropologist by training, draws from his studies in physics, classical history, archaeology, and oceanography for the Mayflower 400. After our conversation, I agree that domain knowledge is perhaps the most important skill needed to contribute to a project of this size and scope.

We hope you enjoyed today’s segment. Please feel free to share your thoughts over social media and in the comments, or write to us at podcast@longitude.site. We would love to hear from you.

Join us next time for more unique insights on Longitude Sound Bytes.