#ItzOnWealthTech Ep 21: Zen and the Art of Artificial Intelligence
“It has nothing to do with how smart you are, what school you went to, or how many PhDs you have in your innovation lab. If your team can’t emotionally deal with the prospect of failing and having to wear egg on your face for five minutes, it’s going to be difficult.”
— Davyde Wachell, Responsive
Davyde Wachell is the CEO of Responsive, a hybrid wealth-focused startup. Backed by Plug and Play Ventures, Responsive helps wealth managers upgrade advisor productivity and decision-making with next best actions driven by predictive analytics. Davyde studied AI in the Symbolic Systems program at Stanford and has worked in wealthtech for over 15 years, having built everything from quant research platforms to compliance automation tools. His film and opera work has been seen at Tribeca, Sundance and the Hammer Museum in Los Angeles. He lives in Vancouver with his partner Holly, who works for Sanctuary AI, a humanoid robotics company.
Now hit the Play button!
This episode of Wealth Management Today is brought to you by Ezra Group Consulting. If your firm is evaluating new technology or looking to improve your current wealth platform, you need to contact Ezra Group. Don’t spend another day using technology that doesn’t offer an elegant user experience. Your advisors and clients deserve better and you can deliver it to them with the help of Ezra Group.
Topics Covered in this Episode
- An overview of Responsive [04:19]
- What kind of data is needed in order to determine what the next best action is for a specific advisor [05:16]
- How we can solve the problem of fragmented data in wealth management [07:07]
- What is a wealth persona [08:33]
- Why AI is stupid [10:29]
- What is cognitive architecture [15:26]
- How fintech is becoming more like sci-fi [18:55]
- Do we need judges and courts for AI? [21:54]
- Discussion on systems of intelligence and how they differ from systems of record [22:58]
- How a system of intelligence can help a firm be more proactive instead of reactive [26:11]
- What is the innovation event horizon, and why doesn’t anyone really know anything about it [41:14]
- Zen aesthetics and software design [45:56]
- What’s scaring Davyde about AI [51:05]
- How Davyde sees AI impacting the future of work, and specifically the future of work around wealth managers [54:49]
Companies & People Mentioned:
- Alpha [32:12]
- Andreessen Horowitz [34:52]
- Chase [37:28]
- Fujitsu Global [44:46]
- Invest in Others [30:32]
- Microsoft [23:37]
- Responsive AI [03:06]
- Salesforce [06:08]
- Wells Fargo Advisors [13:29]
If you are interested in more information about some of the topics Davyde and I discussed, these blog posts would be useful:
- A Consultant’s View on the Leading Vendors in AI for Wealth Management
- 6 Ways AI is Helping Build Consumers’ Confidence in Banking
- 9 Questions on Artificial Intelligence for Wealth Management
Complete Episode Transcript:
Craig: Today on the Wealth Management Today podcast I am very happy to have Davyde Wachell, the Founder and CEO of Responsive. He’s talking to us today live from the Tuscany region of Italy. Hey Davyde.
Davyde: Hey Craig, thanks for having me on.
Craig: Thanks for taking the time on your vacation overseas to talk on my podcast.
Davyde: Well, what could go better after tiramisu, right?
Craig: Not just any tiramisu, tiramisu made and served in Italy.
Davyde: It’s delicious.
Craig: Enough of the jealousy of your vacation, now I’m going to impose on your vacation time and ask you a whole bunch of questions.
Davyde: I love it, let’s go.
Craig: Cool. So if you can, in two minutes, give us an overview of Responsive, the company you founded and are CEO of.
Davyde: Absolutely. We’re a wealth management-focused, B2B software company built around the idea that wealth management can be optimized for everybody: for the end client, for the wealth advisor, the relationship manager, and for the business itself that’s concerned with wealth planning. And underpinning this idea is a core belief that client data can be used to create a better relationship and better results. So if I was giving you the tagline, I’d say that Responsive boosts wealth advisor productivity and decision-making, and we specifically do that with next best actions.
Craig: That’s a great tagline, every advisor wants to boost their productivity. So with next best actions, is that something you feed into other systems? And what kind of data do you need to gather in order to determine what the next best action is for a specific advisor?
Davyde: We view the personal financial data as primal, and we view what happens in time in personal financial data from a behavioral point of view as very primal. We can bring in other data as well into our processes, but the goal is to give the advisor opportunities to better serve their clients, even if it just means checking in. Some of these next best actions are so simple that you could write them on the back of a napkin, other ones are more complex and more subtle. But for us, the core focus is we don’t want to be a brain in a vat. We can plug into different systems, we could plug into a Salesforce or whatever. But we do believe that there’s value in having a full feedback loop that is used and designed to enable better decision-making. And we view one of the problems with wealth management and all these technologies as fragmented data, fragmented software, and therefore fragmented and potentially incoherent service. So we have our own hybrid advisor dashboard that we can build into another system’s dashboard, or we can serve things through an API as well. But our first choice is to be able to control the cooking and have a full human-in-the-loop system.
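Davyde’s “back of a napkin” next best actions might look something like the minimal sketch below. To be clear, the rules, thresholds, and field names here are hypothetical illustrations invented for this post, not Responsive’s actual logic:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Client:
    name: str
    last_contact: date
    cash_balance: float
    monthly_spend: float

def next_best_actions(client: Client, today: date) -> list:
    """Toy rule-based next-best-action generator (illustrative only)."""
    actions = []
    # Simple "check in" rule: no contact in over 90 days.
    if today - client.last_contact > timedelta(days=90):
        actions.append(f"Check in with {client.name}: no contact in 90+ days")
    # Idle-cash rule: more than a year of spending sitting in cash.
    if client.cash_balance > 12 * client.monthly_spend:
        actions.append(f"Discuss investing excess cash with {client.name}")
    return actions

client = Client("Ana", date(2019, 1, 5), cash_balance=80_000, monthly_spend=4_000)
print(next_best_actions(client, date(2019, 6, 1)))
```

Real systems would replace these hand-written rules with predictive models, but the human-in-the-loop shape is the same: the system surfaces candidate actions, and the advisor decides.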
Craig: So fragmented data; data is one of the keys to AI being useful, and then having the data and being able to understand what it is telling you. So how do we solve the problem of fragmented data in wealth management?
Davyde: The first part of that solution is to back off of language like data lakes and the perfect objective framework for doing any kind of analysis. It’s to think about how you represent the financial life of a client, in terms that make sense to them, and in terms that make sense to the advisor. So something we believe in and something we do is the construction of what we call the wealth persona. This is a way of not just aggregating and pushing data together in some store, but actually creating a representation and a framework which makes reasoning on client data easy, both when an advisor looks at it and from a machine learning point of view.
Craig: Another buzzword, I hate these buzzwords.
Davyde: I know, I know. How about I’ll say mathematical science and data, or statistics?
Craig: Some people throw around these AI terms like they’re auditioning for a play or something. When you say wealth persona, can you be more specific about what you mean?
Davyde: Yeah, so at the basic level, the kitchen table level and the way a lot of us think about it is we can look at a client in terms of their balance sheet, their aggregate balance sheet, and we can look at a client in terms of their aggregate cashflow. And we can look at these at high-level categories, and then we can also break these down into semantic classes that might be different from the kind of hard accounting terms we use, and may be closer to how people think about how they store and spend money. I’m being a little bit cryptic because part of this is how we think about it.
Craig: Just come out and say it, you can speak freely on the podcast today, Davyde.
Davyde: You can translate accounting language into behavioral language, and then you can start to look at: are there things that people do in time on a recurring basis? Are there interactions between things people do in time? And then do all these things add up to making it easy to predict certain kinds of events, or to identify certain kinds of people? If I had Logan here, he’d say we’re just getting the ingredients ready to cook a good meal, and there’s a way to frame things.
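The translation from accounting language to behavioral language that Davyde describes can be sketched very simply. The category names and mapping below are made-up examples, not Responsive’s taxonomy:

```python
from collections import defaultdict

# Hypothetical mapping from hard accounting categories to behavioral classes.
BEHAVIORAL_CLASS = {
    "restaurant": "lifestyle",
    "grocery": "essentials",
    "rent": "essentials",
    "brokerage_transfer": "wealth_building",
    "savings_deposit": "wealth_building",
}

def behavioral_profile(transactions):
    """Aggregate raw (category, amount) transactions into behavioral classes."""
    totals = defaultdict(float)
    for category, amount in transactions:
        totals[BEHAVIORAL_CLASS.get(category, "other")] += amount
    return dict(totals)

txns = [("rent", 2000), ("grocery", 450), ("restaurant", 300),
        ("savings_deposit", 1000), ("brokerage_transfer", 500)]
print(behavioral_profile(txns))
```

Features like these, tracked over time, are the sort of “ingredients” a model could use to predict recurring behavior or life events.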
AI Is Stupid
Craig: I understand. So Davyde, tell me why is AI stupid?
Davyde: Because people are stupid.
Davyde: Because they want to solve something before they’ve framed the problem. And what this can lead to is unexpected consequences. So I’ll give you a couple examples, and you’re a computer science guy so you probably know about optimization and back testing and data, and how the road to hell is paved with good back tests, or some version of that adage. We have something like Facebook or YouTube. And these systems aggregate all kinds of end user data, but the one thing they’re doing and they’re optimizing in AI or machine learning or we’ll just say gain boosting to be real about it, is they’re gain boosting on engagement, right? And engagement can be that people enjoy something, engagement can be that it pisses them off and they rage, and so on and so forth. But it seems like the conclusion out there is that this optimization of engagement is actually creating divisiveness in politics and bad cognitive behavior or bad psychology.
Craig: The ends always justify the means; they don’t care.
Davyde: Yeah. So we’re boosting engagement and then this creates a lot of things maybe we don’t like even as a society, right? So if we translate that back into the banking context, we have a lot of people patting themselves on the back about product recommendation tools, targeting, segmentation, and selling all kinds of products. Some of them might be debt or credit cards, based on all kinds of data. So there’s a question of what kind of impact this kind of gain boosting might have on for one, the risk of an organization and for two, the health of its clients. So that’s an open thing to think about. And obviously behind everything is interest rate sensitivity and central bank action, but I think during our next market event there’s going to be a little bit of excitement and disappointment around some of the slam dunks people have been having in gain boosting on recommending banking products, and these kinds of things. And post-game people will realize wow, we recommended this product and we recommended it for this completely diabolical or idiotic reason, would be my expectation.
Craig: That’d be like when Wells Fargo gain boosted the number of accounts they opened by opening up accounts for people without telling them they’re opening up accounts.
Davyde: Yeah, yeah. These are kind of the devils of automation. Forget AI, forget machine learning and advanced space chat – automation can create all kinds of devils. I think right now we’re at peak confidence and peak slam dunk mode, and in the coming years there’s going to be a lot of, I can’t believe we actually let that happen.
Craig: Looking back on things, they’ll say that.
Davyde: Yeah. It’s going to be a similar situation with a lot of the application of AI. I think if you apply AI to a million people, you have a burden of responsibility to think about what the impact of that might be, right? If you’re designing it.
Craig: Right back at you Facebook! So AI is stupid. When you say AI is stupid, is it because the people who are developing the AI are stupid, or is it because the people who are using the AI aren’t framing the problem they want to solve with it properly?
Davyde: We can spin this different ways. One is AI is stupid, and will do exactly what you set it up to do. It’s not a magical fairy that solves your problems; the cognitive architectures we have right now are very limited, and they will do what they have been designed and set up to do to a troubling fault.
Davyde: Cognitive architecture is a word that smart people are using now because the word AI has become so beaten up that it doesn’t mean anything anymore. In the old days, what AI used to mean was that you would design a system to solve some kind of problem. You’d represent the domain and you’d represent ways of thinking or planning, not even from a machine learning or deep learning point of view, but even from an algorithm point of view. You’d frame a problem intelligently, so that a machine mind could proceed through the problem space and solve it. This is like good old fashioned AI. Now we’ve come a long way since the 60’s and beyond basic algorithms, we do have all these new great breakthroughs. Cognitive architectures can be complex systems that organize how sensory information comes into some kind of mind. How that mind will break it apart, remember some of it, act on some of it, and so on and so forth. So cognitive architectures are like the next generation of going beyond the basic machine learning, deep learning, boosting, optimizing one industrial problem into creating minds that do in fact do mind-like things
Craig: Or making AI more intelligent, or more able to do things that are productive versus…
Davyde: I mean if we’re talking about productivity, there’s going to be a lot of productivity gained just by applying the sort of machine learning AI we have right now, like TensorFlow, to industrial problems that are well-represented.
Craig: We’re going to lose half the audience if you start talking about TensorFlow. Would you say cognitive architecture is more about training the AI based on natural language, or me just telling the AI, like it was my assistant, how to do something, rather than training it on a whole bunch of data?
Davyde: Yeah, I think the litmus is does it remember things? Does it reason about objects in a world? Does it relate the meaning or behaviors of objects to other objects? So the things we’d start to call mind, and these are the things that people in robotics deal with. And some of the deep learning people have come up with deep learning topologies that start to resemble these kinds of cognitive architectures, like networks that can represent complex, hierarchical game states and play video games, or interact with environments, or transfer learning. But this is like the real meat, and to bring it back to what we were talking about, having cognitive architectures around the problem of wealth management and banking, because people’s lives are complex. There are things that happen in time, and you need to understand that some things are the same as others and some things aren’t. It’s not just as simple as pushing data into a machine and getting a product recommendation; it’s what happened to this person three years ago, what did they say, and what are they probably going to want to do in five years?
Fintech As Science Fiction
Craig: So moving from cognitive architectures, is fintech becoming more like sci-fi, like science fiction? Are we seeing more of those types of things that we predicted many years ago that would be science fiction like, and now they’re coming into the fintech world?
Davyde: Yeah, there’s lots of things that are happening. It’s interesting because banking is one of the oldest new technologies. The creation of banking technology has radically changed the world, from the creation of government debt to the creation of markets, and the creation of collateralized loans. All of these things have had a huge impact on the course of our civilization. And now for the first time, the system itself is going through this huge renovation where the infrastructure around banking is no longer an advantage; just having infrastructure, just having pipes. And the weakest card gets played when you see a banker say, well, we have the regulatory advantage; I think that’s a very weak card to play.
Davyde: If you look at the history of financial transactions and the way they’ve evolved over the last 10,000 years, we could start with Sumerian empires, where they started accounting with little small clay objects. They’d store them in temples and they’d refresh these on the solstice or the equinox as a way of gathering and redistributing food and products. So this is the first spreadsheet, right? The impact that this innovation had on societies was massive. This created the world’s first empires, the world’s first temple societies, all kinds of things. You can fast forward to Britain’s creation of companies, securitizing companies. What if we all chip in our money and we bought an enterprise, and then the enterprise sent ships to India, right? These are huge innovations. I think we’re now sitting at the eve of a similar moment and inflection point, and it has to do with everything being financialized and financial decisions being automated, and I don’t think anybody understands what that means. There’s going to be computers doing financial transactions with each other, probably more than human beings are making decisions about all this stuff.
Craig: The question is, do we need judges and courts for AI? If different programs are making these decisions, could they be responsible for their actions rather than the people who program them?
Davyde: I think certainly that that is what a lot of nations will try to do, at least in the west. I don’t know if that will be the ideology of say countries like China, that don’t necessarily have the same point of view we have about what society should be, or how it should function. But if you look at the regulation coming out of Europe and the way people are thinking about it, yes; they want there to be some human decision maker that’s accountable to these algorithms that can have a massive impact. And so from our point of view, regulation and explainability and governability is more of a product than the decision sides.
Systems of Intelligence
Craig: I’ve heard the term “systems of intelligence” being thrown around a lot lately, mostly by people I don’t think understand what it means. So can you tell us what a system of intelligence is, and how is it different than a system of record?
Davyde: Okay, sure. Let’s start with the system of record, because I think that’s easy to explain and I can go back to my Babylonian temple. So in my Babylonian temple I bring in a bunch of grain, because they tax the people of Mesopotamia, and I record all the grain and cattle that came in on my little clay tablet. That’s the system. Now we have Salesforce or Microsoft or SAP or whatever; all this stuff gets written down. So everything that goes on with the business or with the government or with any kind of human activities, it’s getting logged in some ledger so that some person can theoretically look at it later, whether they’re a regulator, a data scientist, or whether it’s a piece of customer information that gets rendered on a mobile app.
Davyde: So that’s a system of record. It’s our way of remembering what’s going on with the thing at a point in time, that’s the easiest way of saying it. So a system of intelligence then is the actual sort of organization and process that makes use of that data for some kind of human endeavor. An easy one to point at is Uber, right? So Uber is storing all kinds of information about people who want rides and drivers and where they are and how much they paid and where they’re going, but the system of intelligence is the thing that actually is like, hey, person over here in Fort Greene, Brooklyn needs to get to Chelsea, and we’re going to tell them it costs this much, and they’re going to say yes. So all of the application phase, the intelligence around making Uber run, that’s a system of intelligence.
Craig: So would system of intelligence be an overlay on top of the system of record?
Davyde: Yes, the system of record is the memory and the system of intelligence is the decision maker, I think in easiest of terms.
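The memory/decision-maker split Davyde describes can be sketched in a few lines, using his Uber example. The event shapes and dispatch rule here are invented for illustration:

```python
# System of record: an append-only log of everything that happened.
record = []

def log_event(event):
    record.append(event)

# System of intelligence: reads the record and decides what to do next.
def decide(events):
    riders = [e for e in events if e["type"] == "ride_request"]
    drivers = [e for e in events if e["type"] == "driver_available"]
    if riders and drivers:
        return "dispatch {} to {}".format(drivers[0]["id"], riders[0]["id"])
    return None

log_event({"type": "ride_request", "id": "rider-7"})
log_event({"type": "driver_available", "id": "driver-3"})
print(decide(record))  # dispatch driver-3 to rider-7
```

Note that `decide` never writes to the record; the system of record stays the single source of memory, and the intelligence layer is a pure function over it.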
Craig: And is that something you’re building at Responsive?
Davyde: Yes, our focus is on that. What we’re focusing on doing is, can we bring more intelligence to the advisor and can we bring more intelligence to the enterprise. These are sort of the two users, and obviously the end client is an important user in our system, but we are targeting an environment where maybe the end client doesn’t even have to look at a screen; they’re just dealing with their advisor.
Craig: Can a system of intelligence help a firm be more proactive instead of reactive?
Davyde: Absolutely. I think there’s a million things that don’t even require all this fancy stuff. Like I said, back of the napkin stuff, getting your house in order, connecting the dots, just showing a full view of a client in a way that haptically makes sense to the advisor. We’ve looked at a couple products in market that compete with us and we looked at ourselves, and we still feel like there’s a long way to go on how to show who a client is to a wealth advisor. And there’s all this legacy thinking around how should we look at a client’s portfolio, how should we look at their life? Through these old spreadsheets and tables and pie charts and mountain charts. We think there’s a different view of how you want to look at a client. So I’d say that plugs into the system of intelligence side, but if there’s one thing for people to take away it is to think about how you represent the life of a client to an advisor. How do you do that, what’s the best way to do that? I’m not confident that that has been done at a great level yet. And it’s not as exciting as AI, but I think it’s a fundamental question.
Craig: That’s more of a visualization of the client?
Davyde: That’s something we believe in. We believe in visualizing something before applying boosting to it, or applying intelligence to it. You can talk to planners and portfolio managers, and they have the things they like to look at, right? And the things they like to do. These are industrial processes that were created in maybe the 60’s or 70’s or 80’s; they make people comfortable and they work. But something that I personally believe is that there is a more woke way of looking at end data, visualizing it, organizing it, and thinking about it that comes before you do any of this stuff we’re used to doing, or before you do any AI.
Craig: That sounds like a process that we do; I run a management consulting firm and we tell clients a lot, you’ve got your existing processes, but don’t just buy a new system and replicate your existing processes if they’re not the best way to do things. You built these processes over many years and they may be related to old ways of doing business. You should look at a new way of doing it and then bring in a new system, rather than the other way around.
Davyde: Yeah, exactly. And look, there’s wisdom in the way things were built. So we don’t just throw things out, we don’t just say that’s garbage. But when you think about the way people can interact with things, the way they can look at things now, there’s new opportunities. And especially when you think about how an advisor looks at 100-150 households, maybe they want to look at 300, maybe they want to look at 300 and contexts, right? So how can a person or an organization look at and be aware of 300 people at an advisor level, and how can they be aware of maybe a million people at an organization level, right? So it’s no longer about going through this rogue process of we do this industrial thing and we crank it out; it’s where are you drawing awareness to? How do you draw awareness to a client, into a particular situation or event? And how do you represent that in a way that an advisor can look at and understand very rapidly, and then investigate?
Craig: I want to take a little break from this episode to talk to you about one of my favorite sponsors, the Invest in Others Foundation. Invest in Others is a non-profit, you can find them at investinothers.org. They look to raise money and give out awards to charities that are sponsored by financial advisors, so it’s financial advisors’ favorite charities and charities that they spend a lot of time supporting. Invest in Others looks to get sponsorships from the industry and funnel that money to advisors’ favorite charities. I like this non-profit, I think you should take a look at it. Again, that’s investinothers.org. They have a couple other programs: one is a Grants for Good program, delivering money to different needy organizations and needy groups. They’re also starting a corporate awards program, which is going to be a little bit different but still within the industry and another way for financial services and wealth management corporations to help donate money to people in need. I like Invest in Others, I think you should take a look at them at investinothers.org.
Advisor’s Time Down the Drain
Davyde: I think there’s just so much focus on the portfolio, and I’m a second generation manager and Alpha producer; I love Alpha and I believe in Alpha and I think it’s cool, but you look at a lot of the things that are sold to advisors and it’s like portfolio doo-dads. There’s a lot of software messing around with portfolios. And I believe in research, I believe in portfolio management. But when you’re looking at the division of labor, a lot of time goes into portfolios and messing around, for lack of a better word. So that’s a self-inflicted wound. I think there are two other drains on advisors’ thinking and time. One is just general communications jam up; sometimes the squeakiest wheels are getting the grease when it should be the sauce. There’s not necessarily a way for an advisor to deal with just so much email and crap all the time; it’s mind-blowing to have to deal with the raw communication data. That’s another one: can you find a way to focus on what you need to focus on, and what client you need to focus on?
Davyde: So cleaning that channel up, and then the third one is compliance, housekeeping, and data shipping. And so when we look at that, we say that portfolio management should be about rational policy design, and then mapping that policy into a person’s life. And when we look at comms and signals and jam up, it’s about how can we create a tabula rasa so that when the time comes, the advisor’s just looking at what is the most important thing, right? I can’t remember who said that, “The most important thing is to not forget the most important thing.” I think that’s a good mantra for wealth advisors who have so much work to do all the time in a million different directions. And then the other thing is cleaning up all the bullshit work, right? And there’s a lot of good work being done there. Not having to carry the water from point A to point B, just having that be a policy. So can we transform repetitive work into policy, and can we get rid of noise?
Craig: You just mentioned repetitive work, and AI is taking over a lot of repetitive work, as any automation tools and software have been doing since software was invented. I listened to another podcast from Andreessen Horowitz and they were talking about how jobs aren’t directly impacted by AI, it’s the tasks that make up the jobs. The jobs are just bundles of tasks and that AI will impact the repetitive tasks, and then allow the company to decide where to deploy the human capital towards the most valuable tasks. So do you see this same thing happening in wealth management?
Davyde: Yeah, I think absolutely. I believe history always produces winners and losers, and I think the winners are going to be those people that figure out what the real value drivers of their relationships are, and where they can apply a lot of leverage to the human capital they have. And then beyond that, how can they convert their workforce for this newer model? We’re super long in human capital and wealth advisors and real people providing wealth management; we’re all in on that thesis. But where the winners and losers are going to be made is, are people going to move to where the market is demanding of them, and are they going to cut the bullshit as quickly as possible and focus on what winning looks like? So yeah, 100% I agree with that thesis. And it’s not just a question of buying a piece of software when it comes across your desk. We talk to people and they’re like, well this is a commodity and that’s a commodity, there’s a million people doing that, and we can do that at any time. And I’m like, well are you doing it yesterday? And the thing about not doing it yesterday is that nobody in your organization is learning and nobody is evolving. In different technology cycles that has probably been acceptable, but I think in this next technology cycle that is unacceptable behavior. And those who are learning fast and trying fast and failing are going to do much better than the people who think they’re going to pick it off the shelf in three years.
Keeping Up With The Technology Cycle
Davyde: Yeah, and that’s so hard for organizations because people are on these quarterly review timelines and their bonus. But at least at an allegoric level, this idea that you’re going to have a slam dunk and if you don’t have a slam dunk you have to choke it out behind the shed, I just disagree with. The people who are going to win in this are going to try a bunch of different things, keep iterating and keep getting bloody noses and black eyes until they get it right, and they’re going to keep learning from the human capital.
Craig: I couldn’t agree with you more, Davyde. I do some speaking at conferences, and one of my slides shows the Google graveyard. I don’t know if you’ve seen the website called Killed by Google? It lists all the apps that Google has either started and then shut down or bought and shut down since they started. There are 165 apps, software products, services, and hardware devices that have been shut down by Google. So you think that’s 165 failures, but then they couldn’t have created the incredible successes they have with Android, Gmail, Chrome, and their other successes if they didn’t go through all those failures.
Davyde: Absolutely. It’s the Ray Dalio thing about pain, right? Pain’s there; it helps you learn, and then you iterate and you improve what you’re doing, and you do better next time. And it’s an emotional thing. It has nothing to do with how smart you are, what school you went to, or how many PhDs you have in your innovation lab. If your team can’t emotionally deal with the prospect of failing and having to wear egg on your face for five minutes, then it’s going to be difficult. And I think it’s a weird spiritual thing, but I think it’s the most important. I think that’s the cultural difference between startups and incumbents, and that to me is the most important thing. So the incumbents that learn how to do that are going to clean up.
Craig: Howard Marks said that experience is what you gain when you don’t get what you want. So I’m talking about innovation and disruption here a bit, at least that’s what I call it when I’m giving presentations. What is the innovation event horizon, and why doesn’t anyone know anything about it?
Davyde: It’s the point at which no matter or light, or even information from that matter, can escape a black hole; it’s why black holes are black holes. There’s this radius out from the gravitational center of the black hole, and once it goes in, nothing comes out. I have this theory that as much as people pay McKinsey and Accenture or Boston Consulting Group, there is an innovation event horizon. And it’s this point beyond which people don’t understand what happens, and as smart as they want to say they are and as much as they think they know, they don’t understand.
Craig: I like that. And for those people who are really geeky, when they hear of event horizon they’ll think of the movie with Laurence Fishburne and Sam Neill from the 90’s.
Davyde: One of the best genre films ever made – it’s Solaris meets The Shining.
Craig: You were talking about aesthetics and religion a bit, so talk a little bit about zen aesthetics and software design.
Davyde: The concept I like the best is the pathos of all things. Like when you look at something long enough and you kind of clear your mind of your own preconceptions of it, what you want it to be, what you don’t want it to be, what somebody said about it, all this stuff around all the things that you’re bringing and you see the thing for itself. And this is really hard to do. It’s hard to do in life, it’s hard to do in software; it’s hard to see the thing as it stands and cleanse yourself of the need to be right, or cleanse yourself of the need to be smart. And so I think in software it’s sort of like a humbleness to look at your user, not the hero you’re going to be by delivering them a piece of software, but the troubles they have and the things that worry them and the problems they have. And I think in finance this is a big problem, right? We all want to be innovation leaders, we all want to be very smart compared to the others, right? But at the end of the day, what we’re dealing with is this infrastructure that everybody lives their life on. And that’s a big deal, right? When we’re building software and we’re building transformation and innovation for the lives of millions or billions of people as you said, we have to come to it with a kind of seriousness and humbleness and dare I say sadness that allows us to do the right thing.
Craig: And how can software represent that? Should software understand being melancholy?
Davyde: I think it should, yeah. I think we’re often afraid to represent things in our software that aren’t killing it or winning it.
Craig: And you’re not sure, right? Isn’t that the most precious thing about life, its uncertainty?
Davyde: Exactly. And it’s something that we’re all afraid to say, I don’t know or I’m not sure yet. And I think that it’s invaluable, when you’re investigating and when you’re getting through that first layer.
Craig: Isn’t that something like the way machine learning works? You don’t know, you’re uncertain as to what the results are going to be. You’re hoping for some patterns, but you don’t know.
Davyde: Yes. And even if you get the results you want, you should not be confident that the reason that that learning process is doing what it’s doing is because it understands it.
Craig: Right, it’s confirmation bias.
Davyde: Or your model latched onto coincidences that you think are the reason you’re getting it right.
Craig: One other thing I wanted to mention was things that go bump in the AI night. So what’s scaring you about AI?
Davyde: A lot of us are baked into MBA thinking and all this kind of optimizing and productivity. We keep building these processes to do better and better and better. And what if the processes we build aren’t optimizing what we think they are? That’s the scary part, right? And back to the thing of unintended consequences. What are you optimizing? What are you up to? What do you think you’re optimizing, and what are you actually optimizing? And then what’s the impact of that on human systems, social systems, and financial systems?
Craig: Sure. The term cybernetics comes from the Greek for governance. So I think the concept of cybernetics applied to AI means we need better governance over how it’s applied.
Davyde: Yeah. I mean without picking a side, I don’t think in any country right now we have any philosopher kings. Maybe that’s another scary thing about AI, is that it’s complex technically, philosophically, and scientifically, and from an ethics point of view, how many people can actually preside over that? And yet we’re going to put so much decision making power into the hands of these machines and mechanisms in the next couple of years?
Craig: And who’s going to watch over them?
Davyde: Yeah. Or who’s even going to understand them, even if they were watching?
Craig: It’s almost like string theory, only five people in the world understand it.
Davyde: Yeah, but string theory isn’t running the world economy.
Craig: Well AI isn’t running the world economy yet either, but it could.
The Future of Work
Craig: So we’ve covered a lot of ground here, and one of the last things that we had on our list was the future of work in organizations. We touched on it a bit earlier, about how AI will impact tasks, bundles of tasks that make up jobs, but how do you see AI impacting the future of work? And specifically, the future of work around wealth management?
Davyde: This ties back to our core mission. If we look at the past and the way organizations work, it’s industrial, it’s operational. It’s typified by people dealing with spreadsheets, processing some data (what I’m referring to as the financial matrix), and then outputting some decisions. As we talked about earlier, a lot of those things are going to be absorbed into automation, and a lot of low-level decision making is going to take care of the fiddling that people in our industry used to do in the old days. So then it becomes more about the frontline worker being a researcher, a frontline researcher for the organization, to understand the client base. And not just doing research for the specific service they’re providing the client, but doing research about their client for the organization. So the organization itself, the mother brain or the system of intelligence as we think of it, can do a better job of what it wants to do. There was this sensational announcement today from the CMO of Salesforce about marketing campaigns being over, and I think that had something to do with this idea that the future of marketing is a feedback loop of learning from our customers and then upgrading our services for them.
Davyde: The way we like to look at it you’re sort of creating that bionic frontline worker; you’re equipping them to pay attention in a better way. And again I talked about this earlier, how could that frontline worker focus on what matters for the client and for the business, and how can the business learn about what they’re doing and how they could do it better? And how can the business create policies that promote better service and better results? So the real way to think about it is kind of like this big octopus with tentacles that is always absorbing information from the front line. It’s coming up into the mother brain, and then the mother brain is dispatching new ways of doing things down into the tentacle. So it’s almost like an innovation heartbeat. This is what we’re focused on. And people talk about business intelligence, they talk about CRM, they talk about all this stuff, but what I think they’re talking about is how can we make enterprises be organisms that can continually learn, and upgrade their workers. It’s kind of like this meta level thing. It’s not just about buying a piece of software that makes your guys better by 30%, it’s about creating a system that’s going to enable your guys to learn and get better all the time.
Davyde: And it’s going to enable you to see what’s going on and to design policies that are going to make your frontline workers better. So policy design is something I’m going to say again and again and again, because it’s the essential thing. Tasks? Forget tasks. We can get robots to do tasks. What we want to do is design policies and then decide which policies to deploy. So in the context of wealth management, the enterprise might decide that they’re going to offer a suite of composable financial plans with insurance and wealth management and maybe even banking. I would suggest yes, banking. They’re going to provide these dance moves to their advisors, and then the advisors are going to be the ones to be there and choose, evaluate, and maybe personalize policies. Then they’re also going to be there to learn from their clients about what’s working and not working, so the organization can get better at what it does. It’s a two-way flow of information.
Craig: I like that, as it should be. And that’s what you mentioned earlier, about a feedback loop.
Davyde: Yes, yes.
Craig: And would that be a way that AI could be used responsibly, or used to the best effect? By having that feedback loop and constantly adjusting how you’re using AI in your organization?
Davyde: Yeah. If you look at the people who are trying to do automated driving, they’re not just throwing robot cars out on the road. They’ve got human drivers in there who the robots are learning from. Right? So that’s a pretty solid metaphor. And there’s a reinforcement learning paradigm around that. But even before you get into reinforcement learning, I think there’s the idea that you’re humble enough to accept that there’s a lot to learn in this space. There’s a lot that you can learn before you even start applying machine learning to these decisions. So we can kind of illuminate the whole decision space of wealth management, so that we’re looking at the decisions that are made at every kind of vertex in the system, and what value those decisions have at each vertex. So at the client vertex, at the advisor vertex, at the branch vertex, at the vertex of people designing new products, right?
Craig: I think awareness is the commodity and decision making is the value added. So where does the policy come in?
Davyde: You can look at policy as a set of dance moves, right? You’re not going to have an infinite number of policies. Either you’re going to look at the client’s situation and the guidance that’s been provided by the machine and say, yeah, I totally agree with the machine, let’s ship it, and I’ll contact the client and get this done. Or you’re going to look and think, I don’t understand the situation, maybe we don’t have the full data. Then you’re going to have to go and do research for the client, right? Then you can apply the policy. But the important thing is that human lives are heterogeneous and complex, and these kinds of things cannot be fully automated.
Craig: I would agree that human lives are heterogeneous and complex, very true statements. And a great way to end our discussion.
Davyde: Wonderful. Well, thank you so much.
Craig: Davyde, you did a great job. We went from why AI is stupid into zen into systems of intelligence; we covered a lot of ground here.
Davyde: It was a big one.
Craig: It was good. I want to thank you for being on the podcast and for sharing all this, and everything about AI and why it is stupid.