33 – How the Future of AI Will Impact Businesses and Society as a Whole with Daniel Hulme
Satalia CEO Daniel Hulme discusses the future of artificial intelligence, and how it will impact businesses and society as a whole.
Hear about exciting new applications for AI that are already being used by companies like Tesco and PwC, and find out how you can apply AI in your own business. Satalia is on a mission to make innovation accessible and free for all. Join us as we discuss the implications of this abundance.
Learn how:
👉 AI can help humanity.
👉 AI can become a catalyst for an abundant world.
👉 AI can be perceived as ethically agnostic.
👉 AI can help you, your team and your organization.
Get in touch: https://www.satalia.com/about or reach out to Daniel Hulme on LinkedIn.
About Daniel Hulme
Daniel Hulme (PhD) is a leading expert in Artificial Intelligence (AI) and emerging technologies, and is the CEO of Satalia. Satalia is an award-winning company that provides AI products and solutions for global companies such as Tesco and PwC. Satalia's mission is to create a world where everyone is free to innovate, and those innovations become free for everyone.
Daniel is also the Chief AI Officer for WPP and helps define, identify, curate and promote AI capability and new opportunities for the benefit of the wider group and society.
Having received a Master's and Doctorate in AI from UCL, Daniel is also UCL's Computer Science Entrepreneur in Residence and a lecturer at LSE's Marshall Institute, focused on using AI to solve business and social problems.
Passionate about how technology can be used to govern organisations and bring positive social impact, Daniel is a popular keynote speaker specialising in the topics of AI, ethics, technology, innovation, decentralization and organisational design. He is a serial speaker for Google and TEDx, holds an international Kauffman Global Entrepreneur Scholarship, and is a faculty member of Singularity University.
Daniel is a contributor to numerous books, podcasts and articles on AI and the future of work. His mission is to create a world where everyone has the freedom to spawn and contribute to innovations, and have those innovations become free to everyone. He has advisory and executive positions across companies and governments, and actively promotes purposeful entrepreneurship and technology innovation across the globe.
This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
Send in a voice message: https://anchor.fm/hihellosura/message
Support this podcast: https://anchor.fm/hihellosura/support
*This Transcript is Autogenerated
Hey there, and welcome to the Hi Hello Sura show. I'm your host, Sura Al-Naimi. Today on this episode, we are joined by Daniel Hulme, leading expert in artificial intelligence and emerging technology, and CEO of Satalia. Satalia is an award-winning company that provides AI products and solutions for global companies such as Tesco, which is, uh, you know, a little bit like Publix if you're here in the States, and PwC.
So Satalia's mission is to create a world where everyone is free to innovate, and those innovations become free for everyone.
So for two decades, Daniel has been passionate about how technology can be used to govern organizations and bring positive social impact. In our conversation, we unpack the building blocks of artificial intelligence. We talk about how mathematically complex challenges can be solved through the power of artificial intelligence.
So without further ado, Daniel, welcome to the show.
It's great to be here. So thank you. Thank you so much. I know that we have been, uh, trying to navigate our schedules, so this really is a treat indeed. Uh, so I wanted to kick us off by starting really at a basic level, in terms of, uh, you know, the one-two-three if you're explaining what you do, uh, to somebody that's really unfamiliar, and then we can kind of expand exponentially, uh, after that. So if you were describing, um, your company, and, um, you talked about making innovation available to everyone and free for everyone, um, can you elaborate on that? And, um, you know, really starting from the simplicity of AI.
Yeah, absolutely. So AI, or I guess what people are calling algorithms or mathematics, can help remove the frictions that exist in companies, or it can improve processes.
So they essentially can make organizations more effective and more efficient, and Satalia builds innovations that involve mathematics, um, algorithms, to improve the effectiveness and efficiency of organizations. Um, the second part is that I'm really interested in how to innovate as frictionlessly as possible. So not only do I want to build innovations, but I want to remove the friction from the creation and dissemination of innovations.
Um, I'm interested in creating new organizational structures, using AI, using new organizational paradigms, to get innovations to market as fast as possible. And the third is that I want to try to make the cost of goods and the dissemination of goods as cheap as possible. So if you can enable organizations and people to innovate frictionlessly, if you can enable them to remove frictions from getting goods to people, then you can make the cost of those goods very cheap. And I want to ultimately create a world of abundance, a world where all of the goods that we depend on, our food, our healthcare, education, energy, are as free as possible. And I think to do that, you need to use technology to remove friction from the innovation process. Um, and that's what I'm all about.
Well, that sounds absolutely incredible. So I think to bring it to life, when you talk about, um, things being frictionless, can you bring that to life for us? Because, you know, it sounds like, oh yes, let's make something frictionless.
Yeah. You know, organizations are made up of processes and structures, and each one of those processes usually means some sort of movement, some transaction of information or a physical good. And I guess historically you might have human beings doing that, or you might have systems, computer systems, doing that, and AI and new advances in technology can enable us to do that much more effectively.
So for example, um, if I've got some staff members that I need to allocate to jobs, then I guess historically I would have a human being trying to figure out how do I allocate those people to those jobs, to make sure that they're working on the projects that align with the values of the company, that are good for their career development, that are good for clients, all that kind of stuff.
Now, that's actually a very complex maths problem. So maybe just to geek out a little bit: if I've got five people that I need to allocate to five jobs, there are 120 possible ways I can allocate those five people to five jobs, five times four times three times two times one. If I have 15 people to allocate to 15 jobs, there are now over a trillion ways to allocate them, 15 times 14 times 13 times 12, and so on.
If I've got 60 people that I need to allocate to jobs, there are now more possible combinations than there are atoms in the universe. Most of the clients that we work with have hundreds or thousands of people that they need to allocate to jobs. So using a human being to solve that maths problem is a waste of resource.
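The arithmetic here checks out, and can be verified in a few lines of Python. The sketch below also brute-forces a tiny allocation to show why enumeration collapses at scale; the three-person suitability scores are invented for illustration and are not from the episode:

```python
import math
from itertools import permutations

# Ways to assign n people to n jobs = n! (n factorial).
print(math.factorial(5))            # 120
print(math.factorial(15))           # 1307674368000, over a trillion
print(math.factorial(60) > 10**80)  # True: more than the ~10**80 atoms
                                    # in the observable universe

# Brute force is only feasible for tiny teams. score[i][j] is a
# made-up measure of how well person i suits job j.
score = [
    [9, 2, 5],
    [4, 8, 7],
    [6, 3, 1],
]

def best_allocation(score):
    """Try every permutation of jobs: O(n!), hopeless beyond ~10 people,
    which is why real systems use optimization algorithms instead."""
    n = len(score)
    best, best_total = None, float("-inf")
    for jobs in permutations(range(n)):
        total = sum(score[p][j] for p, j in enumerate(jobs))
        if total > best_total:
            best, best_total = jobs, total
    return best, best_total

print(best_allocation(score))  # ((0, 2, 1), 19): person i takes job best[i]
```

Polynomial-time methods such as the Hungarian algorithm solve this same assignment problem without enumerating permutations, which is the point Daniel is making about using the right mathematics.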
It's a waste of time. We can use algorithms to remove the human from that process, and we can allocate those people to those jobs much, much more effectively and efficiently. And that's just one example: any allocation of resource, whether you're allocating people to jobs, or whether you are routing vehicles to do deliveries, or whether you're trying to spend your marketing money to maximize yield, or allocating your salespeople to shop floors.
All of these are these types of complex decision problems, and we can use algorithms to solve these problems much, much more effectively.
When you talk about vehicles, I think one of the first projects you did, and correct me, um, is the, the UPS, uh, routes. Is that correct?
It was Tesco. So Tesco is the big retailer.
Yeah, Tesco's the biggest retailer here in the UK. Um, they have pioneered online delivery, so they're a big grocer. You can go to their store, but you go to the website, you fill up your basket, and then you click on a button to say, show me the delivery slots for this coming week.
And the idea is you choose one of those delivery slots, and then hopefully that grocer turns up at that time. Now, sitting behind that are multimillion-pound or multimillion-dollar systems that are trying to optimize those vehicles, so that you're minimizing fuel and time, and making sure that you're predicting how long it's gonna take to drive to that customer, how long it's gonna take to deliver to that customer.
Um, there are many, many algorithms that sit behind that, to make sure that you are offering as many slots as possible to those customers, and then you're delivering on that promise. And again, historically, most organizations are either doing that manually or they're using antiquated algorithms, and Tesco claim that what we built for them was the best last-mile delivery solution in the world.
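"Optimizing those vehicles" is the classic vehicle-routing problem. As a concrete picture, here is a toy nearest-neighbour sketch; the stops and coordinates are invented, and this greedy heuristic is far simpler than anything Satalia or a production system would actually use:

```python
import math

# Hypothetical delivery stops as (x, y) grid coordinates; depot at origin.
stops = {"A": (2, 3), "B": (5, 1), "C": (1, 1), "D": (6, 4)}

def nearest_neighbour_route(stops, depot=(0, 0)):
    """Greedy ordering: always drive to the closest unvisited stop.
    A toy heuristic only; production routing optimizes real travel times,
    delivery-slot promises, vehicle capacities and much more."""
    route, pos, todo = [], depot, dict(stops)
    while todo:
        name = min(todo, key=lambda s: math.dist(pos, todo[s]))
        route.append(name)
        pos = todo.pop(name)
    return route

print(nearest_neighbour_route(stops))  # ['C', 'A', 'B', 'D']
```

Even this crude heuristic beats random ordering, which hints at why "antiquated algorithms" leave so much value on the table.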
Um, off the back of that, having built that innovation for that company, we've built our own last-mile delivery solution that we've also taken to market. So we're looking at now taking that innovation, enabling other organizations to use that innovation to become more frictionless.
You did that back in 2010? This is a while back, correct?
It was 2005, 2006. Yeah. Okay. So that was relatively early on; that was our first big client, and we've scaled a lot since. And the example around people allocation: one of our clients is PwC, which is obviously a big global accounting firm, and they've got 5,000 auditors here that they need to allocate to projects. And we've taken those innovations and we've deployed them in other markets. So we do delivery for DFS, which is a big furniture retail company here. We do delivery for, um, Woolworths, which is a big grocery delivery company in Australia. We do staff allocation in stores for DFS, um, using our workforce solutions. So what we want to do is repurpose those innovations and get them making a positive impact on the world.
Now, as well as a lecturer at LSE, you're, um, a faculty member for Singularity University. So for some of our listeners who are not familiar, can you explain what Singularity University's mission is?
So Singularity is concerned with, interested in, the impact that exponential technologies have on society.
And, um, AI is regarded as one of those technologies. Um, and actually, these examples that I've given you are called exponential problems: they scale very, very badly, very, very quickly. Um, and Singularity kind of acknowledged that new technologies that have emerged over the past 10 years can now have a massive impact on organizations, on governments, on society.
Uh, and they do a huge amount of thought leadership and, um, education around the impact of these technologies. So I'm one of the speakers for Singularity.
And so when you're speaking to these organizations, uh, obviously, you know, a lot of companies are familiar with the term AI, uh, but it can become quite a buzzword, uh, in certain organizations.
So what are some really tangible things that an organization should be aware of? And where is there an invitation, uh, to have that as an opportunity, you know, to achieve some of the things that you just described, in terms of innovation getting out into the world, making things frictionless? What are some top things that come to mind that leaders should be considering?
Well, I do a huge number of, um, talks about this subject. So most people currently are thinking of AI as getting computers to do things that humans can do. Um, over the past decade, we've managed to get machines to recognize objects in images, to correspond in natural language like chatbots. And when we get machines to behave like humans, because humans are the most intelligent thing we know in the universe, we then assume that that's AI.
And actually, I would argue that humans are not intelligent. We're very good at finding patterns in about four dimensions, and we're very good at solving problems up to about seven. There's a better definition of AI that comes from a definition of intelligence. So instead of using humans as the definition of intelligence, the definition of intelligence, which was actually my master's and PhD, is goal-directed adaptive behavior. So goal-directed in the sense that we're trying to achieve an objective: we're trying to allocate our staff to maximize utilization, or we're trying to route our vehicles to maximize the number of deliveries, or spend our marketing money to maximize reach. Um, so you have to have a goal, and usually it's a complex goal.
Behavior is how quickly I can answer that question. And we've just kind of discussed that most of the problems that we're dealing with in industry are exponential; they're effectively infinite in size. So being able to answer that question quickly and well, um, is extremely hard. Um, but the key word is adaptive: goal-directed adaptive behavior.
Ultimately, what you want to do is build systems that can make decisions, learn whether those decisions are good or bad, and adapt themselves, so that next time they make better decisions. And if I'm being very honest, I haven't really seen adaptive systems in production. So I don't think that anybody's really doing AI to the true definition of intelligence.
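For readers who want a concrete picture of "decide, learn, adapt", a multi-armed-bandit loop is perhaps the simplest example. This sketch is purely illustrative; the two options and their payoff rates are invented and are not something discussed in the episode:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Two hypothetical options with unknown payoff rates the system must learn.
true_rates = [0.3, 0.7]
estimates = [0.0, 0.0]
counts = [0, 0]

for _ in range(1000):
    # Decide: mostly exploit the best current estimate, sometimes explore.
    if random.random() < 0.1:
        choice = random.randrange(2)
    else:
        choice = max(range(2), key=lambda i: estimates[i])
    # Learn: observe whether that decision paid off.
    reward = 1 if random.random() < true_rates[choice] else 0
    # Adapt: update the running average so the next decision is better informed.
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(range(2), key=lambda i: estimates[i]))  # should settle on option 1
```

The loop makes a decision, observes the outcome, and adjusts its own estimates: goal-directed adaptive behavior in miniature, albeit far simpler than a production system.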
There are some very interesting technologies that have appeared over the past five years. And one of the things I think leadership in any organization needs to be aware of is how to match those technologies with the frictions that exist in their organization. What I've seen over the past several years is a massive mismatch of those technologies with the problems.
Um, and I can elaborate on that if you think that would be useful.
Sure. Yes, absolutely.
So I think there are five different flavors, uh, or applications, of these types of technologies. So the first application is essentially human task automation. There are some relatively basic tasks that humans do that can be automated using technology. Like, for example, taking information from a PDF and putting it into a database. They're very narrow, very specific, repeatable, mundane tasks that humans do, that can be automated using actually relatively simple pieces of technology. So that's one area.
Um, the second area is the representation of human beings. So deepfakes, advances in natural-language models, allow us to now build interfaces, whether they be visual interfaces or natural-language interfaces, that essentially look like human beings and behave like human beings, which is different to the first part. And that also raises lots of interesting ethical and, uh, various other questions. So I might actually be a bot. You wouldn't necessarily know that; I could be sitting in my living room now enjoying myself, and you could be talking to a bot. Um, so that's around human representation, um, of AI.
The third is generation of content. So we are now able to use these technologies to generate text, generate imagery, in ways that we've never been able to do before. Whereas you used to have to go and film a forest, you can now auto-generate forests; where I had to write some copy, I can now auto-generate copy. So, um, you might argue that's linked to number one, which is task automation, but the technologies are quite different and much more sophisticated, to kind of generate content. So that's the third part. The fourth part, which is what I guess a lot of people are currently calling AI, is machine learning.
Machine learning is fantastic at extracting complex insights from data, in ways that human beings have never been able to do. We can now get these technologies to look across lots and lots of different data sets and then, um, extract insights and help us understand the world in ways that, again, human beings have never been able to do. The fifth set of technologies is around decision making. So I've already given some examples there, around allocating staff to projects. Um, these are complex, what I call optimization, problems; again, a different flavor of technology. So each one of these five applications that I've described actually involves a different set of technologies, a different set of approaches. And it's really important that organizations understand what are the right technologies to apply to their problems, because what I'm seeing is a mismatch, a mismatch of skills, a mismatch of technologies. And I'm worried that there's going to be a bubble that will burst in AI, because people don't understand how to allocate these tools in the right way.
Mm-hmm, mm-hmm. And then because they're not allocating them well, they might be dismissed as a technology that's not worthy of investing in.
Indeed, or overhyped, et cetera. In fact, again, I would controversially argue that companies don't have machine learning problems. For the most part they have decision problems, and decision making is a completely different field in computer science than machine learning. They are discrete mathematicians; they used to be called operations researchers. I think over the past five years, and over the next five years, companies have been hiring data scientists, statisticians, machine learning experts, to try and solve these decision problems, and it's the wrong approach. And unfortunately, I think it's going to cast a shadow on the efficacy of a lot of these technologies.
That's really interesting. So you were talking about something at the beginning, and it made me think about a previous conversation we'd had around hack.
And, um, how does that relate to AI?
Yeah, so, um, my long-term view is that we can remove friction from the creation and dissemination of goods, things that we find are our comforts or the things that we depend upon, coffee and all that kind of stuff. Friction, by the way, usually means human labor, right?
It usually means removing humans from the loop. And there's a massive concern about the impact that these technologies will have on work. Um, we know that we can use these technologies to remove tasks, uh, to make things more efficient. At the moment, we're not really using the technologies to remove whole jobs, but I suspect that over the next 5, 10, 15 years, the pressure for organizations to increase profits and to reduce costs means that we will remove whole jobs using these technologies. And the concern is that we might have mass technological unemployment that will cause a huge amount of social unrest. Now, people have said, well, we've always been able to create new jobs. And I think the reason why it's different in this new industrial revolution is because any new jobs that we create, we can probably build AI that can automate them. So people won't be able to retrain fast enough to get those new jobs. Now, that is a concern that people have. It's often referred to as the economic singularity; it's actually a point in time that we can't see beyond, and the term was coined by a very good friend of mine called Calum Chace.
We, uh, Calum and I, and others, believe that we should be accelerating automation. We should actually be accelerating the removal of, um, friction from these goods, because you can then create abundance. You could create a world where people are born and they don't have to worry about working to feed themselves, to educate themselves. All of that's available for free. This isn't taxing the rich to give to the poor; this is just making the world better by giving people access to those goods. And the aspiration is that by freeing people up from those economic constraints, they're able to then go and contribute to humanity in ways that they want.
Now, it might not be paid ways. They might not be paid to do it, but they don't have to get paid, because the world is abundant. Now, people have said, well, what would I do, Daniel, if I didn't work? You know, what would I do if I didn't have a job? That's what sort of defines me. You know, most of the people that I know that have become independently wealthy, that have become high net worth, that don't have to work, they're not sitting around bored and depressed. They're using their time, their freedom, to contribute positively to humanity. I believe that we all have an innate desire to contribute to the positivity of each other's lives, and the lives of future generations.
And I believe that that's actually what makes people happy: contributing somehow to the progress of humanity. And I want to try and free as many people up to be able to do that as possible. And I'm not dictating, saying that we all should be contributing to humanity. Half of the world is in poverty. Most of the people that are alive today don't have the freedom to be able to go and do that. And I think that AI could free people up from those economic constraints, so that they can then allocate themselves however they want.
Mmm. Oh, what was that book that we talked about? Um, that B.F. Skinner book, did you end up reading it?
I didn't, but I've got it on my list. I've still got it on my audio. But, uh, yeah, some of the things that you say make you think of, you know, previous conversations. But that makes a lot of sense, because then you're able to, whether that's... and everyone has such a unique way of contributing their gift, right?
Indeed. Uh, whether it's musically or whether it's solving problems in other arenas. Uh, and I should say that, you know, machines can play chess better than me. They can probably write poetry better than me and create music better than me, but it doesn't stop me from enjoying doing that.
And, um, so I don't think it will stop us from doing the things that we enjoy. I think it will free more people up to find those things. And going back to that book that you mentioned, you know, these concepts have been thought about for probably centuries. And, um, I think what's interesting now, at this time in human history, is that we have the tools available to allow us to do this.
Maybe we don't have the structure of society to be able to enable us to do this, but I am positive that we will move the world in a positive direction. I think if we can free as many people up, um, to be creative, to innovate, um, I think that they will come up with ideas to continue to enable the world to be better and free more and more people. It's like a virtuous circle: the more people we free up, um, from economic constraints, the more positively they can contribute to humanity, and the more people they will unlock to be able to then contribute positively to humanity.
And I do feel like that's the trajectory. It just might not feel like that at the moment, with all of the things that are going on in the world, but I think generally that's the trajectory that I feel the world is on.
So that makes me think of a couple of things. One, the ethics in AI, cuz I know that you contribute really, um, a lot to that, especially, um, in the defense sector.
Um, and then the other thing that came to my mind is, what other limiting beliefs might we currently hold, um, that might kind of impede this progress? So one of the ones that you just mentioned was, well, like, um, you know, I need to work, for example. Right. And that's based on, um, a worldview right now where, yes, you do need to work, because you need to bring money in so that you can survive and pay your rent, et cetera. But what if that was abundantly available and you could have whatever you wanted? So then what would you do? Are there any other, um, limiting beliefs or constraints that come to mind that, if we were able to reframe them, would kind of accelerate our progress?
I think when it comes to ethics, I have a relatively controversial stance on this.
I think there's a lot of noise, there's a lot of misunderstanding in industry, by top-level government officials, that misunderstand these technologies. And so my controversial view is that there is no such thing as AI ethics. Um, AI is a tool, a technology, that enables human beings to achieve their goal or their intent.
So human beings form an intent, which might be, for example, maximize the utilization of my employees: I've got 5,000 employees, my intention is to allocate these to jobs to maximize the value to my business. That's the intent. Or, um, I might be, like, an Uber ride-hailing company, and my intent is to maximize my, um, revenue from surge prices. Okay, so that's the intent; it's set by a human being. The AI, the technology, is then trying to achieve that intent. The definition of ethics is the study of right and wrong, and it's the intent that needs to get scrutinized from a right-and-wrong perspective.
Not the AI. The AI is just doing its job. The AI in the Uber example might identify that, when the battery level is very low, people are willing to spend more money on their ride. So essentially, you are vulnerable, and I'm going to exploit that vulnerability: I'm going to charge you more for your ride, cuz I know that you're willing to pay for it. Now, it's up to the ethics committee to determine whether that's acceptable or not. They could easily use the battery data not to get more money from those people; their intent could be to prioritize rides because they're vulnerable, or their intent could be to identify cars that might have chargers in them.
So it's the intended use of these technologies that needs to get scrutinized from an ethics perspective, not the AI. And I know that there's a lot of concern around bias, and that people are building bias into these systems. And unfortunately, most of those conversations are being had by people that don't understand these technologies. Uh, yes, these technologies are biased, um, they are inherently biased; that's just how they work. The goal is to make sure that the bias is minimized as much as possible, and that where it has the ability to affect people's lives, you're putting controls around that.
And, um, the other question you have to ask yourself is, is it better than the human being? So whilst there is bias, is it still better? Is it still able to make a decision better than a human being? And for the most part, the answer is yes. So, um, if I build a system that behaves in the wrong way, that is biased, then that's a safety problem. It's a design problem. It's not an ethical problem. And I think there's too much confusion, there's too much noise, there's too many people selling consultancy scams off the back of, uh, not understanding these technologies.
Mm, thank you. Thank you for sharing that. It's like a whole new world for me.
I appreciate it. Um, so recently, I think in the last couple of years, I get my dates wrong, which is why I'm not a mathematician, um, and that's not really related, but your company has experienced massive growth. Uh, and so with that growth, I'm curious: has that shifted any of your focus, or has that opened up any new, uh, opportunities that you're really excited about?
Yeah, we did really start to grow. I mean, the company's been around for a long time. We were very early; uh, we'd been talking about AI before people coined the term. And, um, it's only really in the past six or seven years have we really started to grow. And actually, we were acquired, um, 10 months ago; we were acquired by WPP, which is the biggest media agency in the world. They produce, I don't know, a third to a quarter of the world's content. And, um, we had a lot of companies interested in us, uh, you know, most of the big technology consultancies. Uh, we'd been working with WPP for quite some time. I really, really like the organization. There are loads of synergies between what we do and what they needed.
Um, maybe it's allocating work, uh, more effectively to the 120,000 people that they have. Maybe it's using technology to make the whole marketing and media process more efficient. Maybe it's taking AI to their customers. Uh, but I also get the, uh, opportunity to be the Chief AI Officer for WPP. So I'm thinking very deeply about the impact, uh, that these technologies have, particularly in marketing and media, um, and trying to build frameworks that hold WPP to account, but also, um, our partners, uh, and, um, customers, to make sure that we're using these technologies in the coolest ways instead of creepy ways.
Coolest ways instead of creepy ways. Is that a t-shirt?
Probably, yes.
I think somebody said, you know, what defines whether a piece of technology is gonna grow up to be... and then they give two examples: you know, one is like Dr. Evil and the other one is George Clooney. And, um, to your point, it's not the technology, but it's the way that it's being used.
So I think it's really a fundamental misunderstanding that people have. Um, when technology is growing in whichever field, it's absolutely down to us, it's down to us to decide how the technology should be used. And I have to say, you know, there are times when these technologies will behave in ways that we don't predict, and will have an adverse effect, you know, going back to this idea of staff allocation.
If your goal is to maximize utilization, to maximize the allocation of that staff to maximize revenues, then AI can be incredibly effective at that, um, in ways that no other technology has been. But what you'll find is that will have a knock-on effect: people will be working longer hours, they'll be driving longer distances.
Maybe customers won't be seeing the same faces that they want to see. So you then have to adjust your objective to not just maximizing utilization, but maximizing career development, maximizing continuity for clients, minimizing travel time. So, um, these technologies, if deployed incorrectly, can have an adverse effect. The intent still might be good, but it can have an adverse effect, and you need to be able to adapt very rapidly to those adverse effects, so that you're not creating long-term harm.
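Blending those competing objectives is commonly done with a weighted score; a toy sketch, where the weights and factor names are invented for illustration and are not taken from Satalia:

```python
# Hypothetical weights for the competing concerns; each factor is
# assumed to be normalised to the 0..1 range.
weights = {
    "utilization": 0.4,
    "career_development": 0.2,
    "client_continuity": 0.25,
    "travel": 0.15,  # travel is a cost, so it is subtracted below
}

def assignment_score(factors):
    """Higher is better; retuning the weights changes behaviour system-wide,
    which is how adverse knock-on effects get corrected."""
    return (
        weights["utilization"] * factors["utilization"]
        + weights["career_development"] * factors["career_development"]
        + weights["client_continuity"] * factors["client_continuity"]
        - weights["travel"] * factors["travel"]
    )

s = assignment_score(
    {"utilization": 0.9, "career_development": 0.5,
     "client_continuity": 0.8, "travel": 0.3}
)
print(round(s, 3))  # 0.615
```

Shifting weight away from raw utilization and toward continuity or travel is exactly the kind of rapid adjustment Daniel describes when adverse effects appear.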
Uh, and, and that that's something that, that we do on a, on a day to day basis. But, but ultimately it comes down to the intent. Um, are we using these tech technologies to benefit people's lives? Yes or no. Can we make sure that we can adjust and adapt them? If we see adverse effects? Like the, the, the battery example?
Mm. So with a disruptive technology, for example self-driving cars, one of the adverse effects might be that, because there are fewer accidents, there might be fewer organ donations, you know? And so then there's less availability for people that need operations or organs.
So is that a type of mapping that you are continuously doing in terms of the impact of something that you're recommending for a client? Absolutely. And I can't name the clients, but we are absolutely trying to apply these technologies to the healthcare industry. So not just helping, you know, retail organizations to remove inefficiencies, but working with organizations that do have a material impact on people's lives.
And it's something that we're really passionate about. Mm. And so one of the projects that we're working on is around care at home. So off the back of our logistics solution, our delivery solution, we've built what is called a field service solution, where you get skilled workers essentially going to visit, you know, things that need fixing, people that need fixing. And so what we're doing is we're looking at taking that field service solution to providing care at home, so you've got the right nurses going to people's houses that can support the patient's needs. Now, things like resource allocation and some of the other examples that you've shared really resonate.
Are there any others that feel really human, you know, like psychologically impactful? And all of these are psychologically impactful, of course. But I think you'd mentioned before some particular projects that might help with, for example, mental wellbeing. So we do quite a lot of experimentation inside Satalia.
So the hypothesis is: if you want to be strong at providing services, assets, products, any innovation, and getting it to market as fast as possible, you need to create an organizational platform that's fluid. Most organizations, when they start out and scale, typically organize themselves around either a product or a service.
And then they have to diversify their revenue streams, and they try to provide kind of hybrid offerings, and their organizational structure doesn't allow for that. So I've been very interested in creating an organizational platform that allows for fluid working, the right organizational structure to emerge according to the innovation that needs to get to market.
And that challenges roles, it challenges how people are paid. But all of the ways that we implement these things in my company are grounded in psychological safety. So for example, we used to get people to make public recommendations for their salary, and then people would vote on whether those salaries should be reduced, increased, or kept the same.
And that was better than a manager, because your peers were able to determine your salary, as opposed to people that might be less informed. But it turns out that some people feel very uncomfortable about declaring their salary. So we learned a lot from that: whilst it was a better mechanism, it created some psychological unsafety, even though having a manager often creates even more psychological unsafety. So we've adapted that now to using peers to determine your skill levels and your performance, which isn't too far away from how other organizations do it.
But we are looking at using AI to create what are called liquid democracies. So if I've worked very closely with you over the past year, if I'm very knowledgeable about your domain, I might have more votes for your career development and your salary than somebody else. So we can use AI to identify who are the best people to make these decisions, and then empower them with a weighting.
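The weighted-voting mechanism described here can be sketched in a few lines. This is a hedged illustration of the general idea, not Satalia's implementation; the voter names, weights, and salary figures are all invented.

```python
# Illustrative sketch of weighted peer voting (liquid-democracy style),
# as described above: peers who know your work best carry more voting
# weight. All names, weights, and figures are hypothetical.

def weighted_decision(proposals, influence):
    """proposals: {voter: proposed_value}; influence: {voter: weight}.
    Returns the influence-weighted average of the proposals."""
    total = sum(influence[voter] for voter in proposals)
    return sum(value * influence[voter]
               for voter, value in proposals.items()) / total

# A close collaborator (weight 3) pulls the outcome toward their
# proposal more strongly than a distant colleague (weight 1) does.
salary_votes = {"close_collaborator": 100, "distant_colleague": 80}
weights = {"close_collaborator": 3, "distant_colleague": 1}
```

Here the decision lands at (100·3 + 80·1) / 4 = 95, closer to the better-informed voter, which is the behavior the weighting is meant to produce.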
Mmm. Oh, I can't wait to hear more about these internal experiments as and when you can share them. Indeed. Well, I really appreciate your time. Is there anything else that you want to share with the listeners that they should be aware of, or something that they should look into or experiment with?
I mean, I've covered a lot of ground there. You have. I guess just that there's a lot of misunderstanding around AI. And I know that every AI company is going to say that, but I've got over 20 years of experience in applied AI, and I am deeply worried that there's a massive misinvestment going on.
So the first thing I would do, my gift to everybody, is to give my time. So if anybody wanted me to come and do a lunch-and-learn or a brown bag or whatever, to educate people in their organization about what these technologies are and aren't, then please just reach out.
Well, that's just a perfect segue to my next question. What is the best way to get in touch with you? So LinkedIn, Daniel Hulme, you should be able to find me, or email, daniel@satalia.com. Amazing. Well, Daniel, thank you so much for your time. As always, I'm looking forward to our next conversation.
Likewise. Thanks, Sura. Thank you.
Well, listeners, thank you for joining us today. As always, I really welcome your comments, and as a community, your reflections, inspirations, and questions direct all future conversations.
Thank you for listening to the HiHelloSura Show. I am your host, Sura Alnaimi.