Omar Hatamleh on cross-innovation, using AI to identify opportunities across industries, and earning customer trust
Omar Hatamleh will be flying to the Gold Coast to keynote at ADAPT’s Connected Cloud and DC Edge in March. He shared with ADAPT’s Director of Strategic Research his learned experience in cross-innovation, using unsupervised AI to identify the opportunities humans are missing across industries, and earning the trust of your customers in a world of complex cybersecurity threats.
Dr. Omar Hatamleh was the Chief Innovation Officer, Engineering at NASA Johnson Space Center and Associate Chief Scientist at NASA ARC. He has since assumed the role of Executive Director of the Space Studies Program.
We talk a lot about how we can collaborate successfully within our organisation, across organisations and so on. You ran the acclaimed Cross-Industry Innovation Summit for NASA over the last four years. We spoke about this when you were in Sydney last time as well: the importance of working across industries, sometimes between competing organisations. What were the main conclusions from that cross-innovation summit? And how can we accelerate innovation, technology exchanges and learning between companies? I think you used the example of NASA in space, for example.
Exactly. So that’s an example of how to diversify everything we’re doing. A few years ago I did some strategic partnerships in non-traditional sectors, which are sectors outside of aerospace. I noticed when I was looking at, for example, the oil and gas industry, that there was so much in common between us. For example, we both work in harsh environments, sometimes in inaccessible locations. The technology at the end use is probably going to be different, but the technology to get what we need, like simulations, extreme temperatures, robotics, safety culture, risk assessments, all these things are very, very common. So I started thinking, and the more I expanded, the more I saw similarities across different industries.
Then I thought “Why don’t we just have a meeting where we actually bring people as diverse as possible?”
It’s interesting because in this meeting, for example, we had the Chief Innovation Officer of L’Oreal, we had a VP of Uber, the Chief Innovation Officer of Google. We had the chef of the most expensive restaurant in the world. We had the CEO of United Healthcare.
So, these are people who are 100% diverse, completely different. What we’re interested in identifying is how they harvest innovation, how they create new ideas, new concepts.
When they have roadblocks, how do they deal with them? Can we learn from each other? Can we learn from the successes? Can we learn from the failures? Can we work together as a team to tackle some of the common challenges shared across the board? The answer was ’Yes’ to all of these questions. It was so valuable that we kept running it for several years, because there was a lot of value for everybody involved. So working across different industries, I think, is definitely key now in terms of succeeding and getting things done faster.
That’s a really good point. Yeah, and that’s certainly something I’m seeing in the local marketplace as well. I did some round tables last year around security as it relates to the Cloud, for example, and even with competing organisations in that room, it could have been National Australia Bank, ANZ Bank, these kinds of guys, they really see the value in working together for a common outcome. That’s particularly true when you think about innovation, and then areas such as security as well, because there really is no competitive advantage to be had by trying to compete on security at the end of the day.
The next thing is when we think about collaboration in general, and some of the emerging technologies and ways of doing things. There’s one school of thought, when looking at something like AI for example, that it can actually dehumanise things to some degree, but there’s another school of thought that AI can actually be used, I think, to help drive collaboration and interaction between people. Recent advances in data science are expanding the potential of AI and machine learning, but many organisations are really being held back by poor data quality and the capability to manage and govern that data.
Do you think 2020 will be the year of more widespread AI? But also in terms of how AI can actually help people work more effectively, collaborate more together and maybe take away some of the fear factors that go with it as well?
That’s a really good point, and I absolutely agree with you in terms of that whole focus around security, and cyber security specifically. But what about trust? There’s a lot of talk about securing the perimeter internally, but when we think about AI, machine learning and data, there’s clearly a wider issue of trust: how trusted is a source of data, and the people who are keeping the data, and so on. What’s your perspective on that?
Yeah, I don’t think, Matt, that 2020 is special in any way. It’s just that we’ve been making a lot of progress in artificial intelligence, and we will continue to make more progress because we’re making many more advances. Computers are getting faster, we’re getting better results, and the algorithms are getting more mature.
To give you an idea, artificial intelligence basically leverages big data, algorithms and faster computers. Then within the algorithms themselves, we have supervised, unsupervised and maybe reinforcement learning.
Just to give you an example of supervised and unsupervised algorithms: the way to let machines learn is by exposing them to data. Supervised algorithms look, for example, at hundreds of thousands or millions of pictures of dogs and cats, and eventually the computer will be able to tell you whether a picture is a cat or a dog.
An unsupervised algorithm looks at all these pictures, but doesn’t have a labelled outcome. It doesn’t necessarily tell you whether this is a cat or a dog; it might group them by, for example, whether it’s night time or daytime, inside or outside. It looks at them completely differently than any human would. So all these characteristics, I think, will be essential in giving us insight, connecting dots and looking at things that would have been very difficult for a human to see.
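The distinction Omar describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the feature values and the "cat"/"dog" data are invented for the example): the supervised part classifies a new point using known labels, while the unsupervised part groups the same points by similarity alone, with no labels given.

```python
import math

# Toy "pictures" reduced to two numeric features each (hypothetical values).
examples = {(0.1, 0.2): "cat", (0.2, 0.1): "cat",
            (0.9, 0.8): "dog", (0.8, 0.9): "dog"}

# Supervised: labelled data lets us classify a new point
# by its nearest labelled neighbour.
def classify(point):
    nearest = min(examples, key=lambda p: math.dist(p, point))
    return examples[nearest]

print(classify((0.15, 0.15)))  # lands near the "cat" examples

# Unsupervised: no labels at all; group points purely by
# similarity (a tiny two-means clustering sketch).
def two_means(points, steps=10):
    c1, c2 = points[0], points[-1]  # start centroids at two arbitrary points
    for _ in range(steps):
        g1 = [p for p in points if math.dist(p, c1) <= math.dist(p, c2)]
        g2 = [p for p in points if math.dist(p, c1) > math.dist(p, c2)]
        c1 = tuple(sum(x) / len(g1) for x in zip(*g1))
        c2 = tuple(sum(x) / len(g2) for x in zip(*g2))
    return g1, g2

print(two_means(list(examples)))  # two discovered groups, never told "cat"/"dog"
```

The clustering step recovers the same cat/dog split here only because the toy points happen to separate cleanly; in general, as Omar notes, an unsupervised algorithm may group the data along dimensions no human would have chosen.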
If you look at that, then you can apply these techniques to absolutely any industry: engineering, legal, finance, they can be applied absolutely anywhere. And the more we move forward, the more companies, corporations, academia and governments will keep implementing these technologies. Initially it’s a challenge, because until you adopt it there’s obviously some cost associated with the learning and training, and of course there are downsides as well. You need to be watching for bias, for example. Do we need to regulate these things? Otherwise they’re going to go out of our control, so there are a lot of challenges you need to be working on, right?
Privacy is another aspect of security. Cyber security is going to be the foundation; without proper cyber security we’re not going to go anywhere. The technology is becoming ubiquitous, affecting almost every single sector and every single aspect of our life, and we have to look at it that way. The more we advance, the more it’s going to be intrusive in different areas. It’s going to create value and, at the same time, challenges, and we need to start anticipating what those challenges are. The lawmakers, corporations and everybody as a community need to start tackling these issues and make sure we have regulations, and that everything is looked at in an adequate way.
The problem depends on who is creating the machine, who is creating the algorithm, who is creating the system. It typically tends to take the shape of the corporation, the government, the academic institution or whatever entity is creating it. So are these black boxes going to be open for people to look at, to actually identify what’s happening and whether it’s safe or not? That’s going to be the question we’ll be looking at.
Look at all the social media now, for example: they’re basically collecting tens of thousands of parameters on people, which are being used for certain things, most of the time without consent or with only a very vague articulation of the rules of engagement. That’s where people actually start to have issues with trusting these kinds of things. I think it goes back to that.
If we have a solid, robust way of regulating these black boxes, and making sure the platforms are audited and are not malicious or carrying code that doesn’t belong in that context, I think then people will start feeling happier and more comfortable dealing with these systems.
Omar Hatamleh is the Executive Director of the Space Studies Program at International Space University. He has led four annual Cross-Industry Innovation Summits in Houston. He was the Chief Innovation Officer, Engineering at NASA. In March, Omar will keynote at ADAPT’s Connected Cloud & DC Edge on NASA innovation lessons.