Chris Hood (00:00): Hey everyone. Thanks for tuning in. By 2030, 50% of all jobs will be vulnerable to automation, according to McKinsey and Company. This seismic shift creates opportunities and challenges as we prepare for a world increasingly influenced by artificial intelligence. In this episode, we are joined by Garik Tate, an AI strategist, investor, and executive, to explore the growing presence of AI in our personal and professional lives, delve into its nuances, and forecast what AI will look like a decade from now with the changing landscape of inputs and outputs. Grab a copy of my book, Customer Transformation. This is your essential guide for customer success in the digital age. Learn how to adapt to your customer's ever-evolving needs and revolutionize your business strategy to achieve sustainable growth. Available now on Amazon, Barnes and Noble, and my website. And of course, to support the show, visit chrishood.com/show, subscribe to the show on your favorite podcast platform, follow us on social media, or you can email me directly at show@chrishood.com. I'm Chris Hood, and let's get connected.

(01:21): Connecting... access granted. It's the Chris Hood Digital Show, where global business and technology leaders meet to discuss strategy, innovation, and digital acceleration. 5, 4, 3, 2, 1. Your digital evolution starts now. Here's your host, Chris Hood. Let's dive right into it. Garik, would you mind introducing yourself?

Garik Tate (01:56): Yeah, thank you Chris, and hello everyone. I'm Garik Tate, an AI futurist and strategy consultant, and I get businesses acquired at higher valuations through AI development, hiring elite top talent, and operational automation.

Chris Hood (02:10): Awesome. An AI futurist. I actually want to dive into that, because when I think about AI, we could go in any number of directions.
It is a conversation that pretty much everyone is having in some form or another, but I actually want to dive into some of the challenges that we might be faced with regarding AI. And that could be personal, that could be professional, it could be with our businesses or society. Starting there, what is your perspective on some of the challenges we are faced with related to AI?

Garik Tate (02:47): Yeah, so on a first-principles level, there are a lot of things that people are worried about with AI that don't worry me so much, as long as we have smart people working on the problem and their interests are aligned with our interests. So one challenge that some people bring up is: what if AI starts producing most of the content, while it's also reading from the internet? What happens if it's just consuming its own tail, and where does that lead? And I think there would actually be some cause for concern if smart people weren't working on that, because if AI self-consumes, it starts to become less and less sane. What worries me a little bit more is when interests aren't as aligned. Also, of course, the unknown unknowns. But probably the primary example of that type of challenge is the challenge of spam. Right now, for many people, most of their communications are already cold calls and spam.

(03:48): Calls that people don't pick up. It's a whole world of people being spammed and overwhelmed with wrong information. And I think that as the marginal cost of producing information decreases with AI, that is really just being taken to a whole other level. And I'm a little bit worried, because I think the best solution to that will be more personalized AIs that self-filter. So let's say you have an AI assistant that knows your interests, that you have given your own data and your own preferences to, so it can filter the data for you. But I feel like the interests of these larger corporations might not be as aligned with that.
They would want to be the one doing the filtering. That's been the model up to this point: Gmail creates the algorithm for the spam filter, Facebook creates the algorithm for what shows up on your newsfeed. And so I'm a little bit worried about that side of things, of too much regulation coming in, which does improve a lot of things about the market, but also creates a moat, or a barrier to entry, for the open source community or for individual actors working on their own AIs. I would say that's one of the bigger challenges I see on a first-principles basis.

Chris Hood (05:04): How long have you been working with AI?

Garik Tate (05:07): We've been working with AI for about six years now. Our first projects were all AI based.

Chris Hood (05:15): I think what you're saying is actually really interesting as spam and false information materialize. There's nothing preventing me from saying the sky is green, and if I say it enough and then I begin training an AI on it, individuals out there who are consuming this knowledge go and ask, say, ChatGPT, "What color is the sky?" and it returns, "Well, the sky is green." That's how AI is starting to learn false information. It's not just that it can't comprehend things; we're actually feeding it that false information, which is propagating through every blog post out there. Now everything talking about the sky is regurgitating what the AI has learned, and if it's wrong, we are doomed.

Garik Tate (06:03): There's an interesting idea here. Have you heard of the Anna Karenina principle? It's based off a book, I believe of the same name, written by Leo Tolstoy, and the book opens with a phrase, I believe it goes, "All happy families are alike; each unhappy family is unhappy in its own way." So it's this principle that to do something correctly, or to do something well, there's a narrow band of possible ways to do it, while the ways to mess it up are endless.
I think this is actually a principle that helps AI pretty immensely, because truth repeats itself, so to speak; it parallels itself. Falsehoods, on the other hand, from people putting out misinformation, are each going to be a little bit different. So when we average data, we're actually really surprised at how often the correct data sifts to the top.

(07:05): There's actually an example of this. A man, I believe his name was Francis Galton, in the Victorian era, had some really terrible beliefs around eugenics and other things. He wanted to see what the common factors of criminals would be. I believe he was the inventor of a certain photographic technique where you could layer images on top of each other, and he wanted to create a layered image of every criminal's face they had on record, to see what the average facial structure was, in order to take eugenic steps to avoid it. And he was surprised that the average face of the criminals was quite beautiful, quite attractive. The answer is that if you average any set of faces on top of each other, the result is almost always attractive, because the individual marks and asymmetries get washed out in the averaging. So averageness can actually be a strength, or a source of aesthetic, or add your adjective. And I think AI very often benefits from that, because otherwise what you're saying here about the sky being green is a major issue. It still is a major issue, but this at least helps curb some of it.

Chris Hood (08:17): I'm also thinking about services like Wikipedia that have some form of validation, right? There are checks in place to confirm that the information is accurate: multiple sources, reliable sources. Now, that doesn't prevent certain biases from still being introduced.
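The averaging effect Garik describes can be sketched in a few lines of Python. All numbers here are invented for illustration: each "source" reports the truth plus its own independent error, and because the errors differ while the truth repeats, the mean of many noisy reports lands near the truth.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0  # the "correct" answer hidden in the data

# Each source reports the truth plus its own independent error,
# echoing the Anna Karenina idea: errors differ, truth repeats.
estimates = [TRUE_VALUE + random.gauss(0, 25) for _ in range(10_000)]

average = sum(estimates) / len(estimates)
print(f"average of {len(estimates)} noisy reports: {average:.2f}")
```

With independent errors of standard deviation 25, the mean of 10,000 reports is expected to sit within a fraction of a unit of the true value; correlated misinformation (everyone repeating "the sky is green") is exactly the case this averaging cannot fix.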
I mean, the entire world is filled with editorial biases and opinions, from news organizations or political stances, that are also influencing this language as it goes into AI to be taught. I'm not sure if there's really a way to prevent that, but I think the framework of at least being able to say we need multiple sources before we inject something as fact is a critical point.

Garik Tate (09:07): Yeah, a huge factor moving forward for AI is creating a better reputation-based system. It's consuming essentially the entire internet, so it has to put some sources above others, and I think the reputations of some websites, like the New York Times or Wikipedia or other places, are going to be factored into the algorithm of what it consumes and how it weighs it.

Chris Hood (09:35): Yeah, it's no different than, say, SEO results. You're at the top if you've got a better reputation, basically: if you are trusted, if you have more reliability. But on the topic of trust and perception, I think consumers on the outside definitely have a confused perception of what AI even is right now.

Garik Tate (09:59): Yeah, the way that I think about AI is that it is doing to intelligence what electricity did to power, power defined as the ability to do work. In the 1920s, or give or take, around the turn of the 1900s, electricity became widely proliferated, and people could all of a sudden add it to just about anything. A hammer would become a jackhammer, your saw would become a power saw; you could add electricity to just about anything. Carriages became cars, so on and so forth. At this point, we're doing a similar thing with intelligence, where we can now add intelligence to just about any process or just about any tool.
So if you take the jackhammer: right now, we can develop a jackhammer that measures the point of impact and then adjusts how hard it hits to break the rock properly, or to do its business. The places and applications to add AI are really anywhere we have defined problems that require intelligence. Now, they do have to be defined problems. We can't just ask it poorly worded questions; all of a sudden, the way we phrase the question becomes everything. But anywhere we have a well-articulated question and a well-articulated problem that needs intelligence, and we have data to feed it, those are places we can add AI. So that really affects just about every business, every industry imaginable.

Chris Hood (11:27): I've had a couple of conversations recently with people about AI, and it's amazing when I ask them, "Well, you tell me what you think AI is," and they lay out their opinion about it. If I were to overly simplify it, they give this sci-fi movie analogy of what AI is. I believe that the perception of what AI is today is actually what AI is potentially going to be in probably another 10 years. How accurate do you think I am there?

Garik Tate (12:04): That's a really good question. When we're looking at AI, it's important to take a look at what innovations have brought us to where we are today. And there's a paper that not a lot of people know about, but it's really the paper that, in 2017, kicked off the revolution we're now benefiting from today with, among other things, ChatGPT. That white paper was called "Attention Is All You Need," and it was a paper, I believe released by Google, that came up with the idea of the transformer. So if you know GPT, the T stands for transformer. And the way that I like to describe it is: if you wanted to make an airplane, you would be studying birds, but birds wouldn't be airplanes. They're using similar principles, but they're not the same.
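The core mechanism from "Attention Is All You Need" can be sketched in plain Python. This is a toy scaled dot-product attention with hand-picked vectors (all invented for illustration); real transformers add learned projections, multiple heads, and far more machinery.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes the values,
    weighted by how strongly it matches each key."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three "tokens"; the query matches the first and third keys most strongly.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
queries = [[4.0, 0.0]]

mix = attention(queries, keys, values)
print(mix)
```

The point of the sketch is the shape of the operation: nothing here "understands" the tokens, the machine just routes and blends information by similarity, and it scales to enormous amounts of data.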
(12:59): So the transformer is to neural nets as airplanes are to birds. A neural net is a little bit closer to what the human brain does, to how biology solves the problem. The transformer takes that, uses the same principles, but creates a machine around it that vastly increases its ability to take in large amounts of data and parse it correctly. So we've really been benefiting from that one innovation over and over again. Essentially, we've just been throwing more and more processing power at it, and more and more information. All the innovations since that point have been iterative improvements; they haven't been watershed moments. And we need a few more major innovations like that one before we get to that point of true AGI, or true intelligence. I think there's actually an example here with AutoGPT. A lot of people thought AutoGPT was just the next logical step.

(13:58): I don't know about you, but I personally have not seen one person in my personal network who still uses AutoGPT, or who used it for anything other than an experiment. But it really felt like, in the moment, I think it was about four months after ChatGPT became widely accessible, it felt like the next logical step, the next explosion. It really is trying to take that next level, where these AIs become more self-prompted and can make their own goals, and we're not anywhere near that. We need another major innovation like the transformer, and that is famously hard to predict, because it means you're trying to predict a true epiphany rather than iterative improvements. It could be 10 years, it could be longer or shorter. From following the AI industry, I think we do come up with about one every six or so years. That could be sped up now that so much more funding is going into the space, but I would say otherwise we're probably two to six major innovations away from something like a true AGI.
(15:08): And so if you plot that out, yeah, about 12 to 36 years is what we're looking at.

Chris Hood (15:15): In there, you were talking about the technology advancements, and if we were to dissect this, I would argue that it's the natural trends we are seeing in computing in general: more access, more storage, faster speeds, cloud computing. I think those natural evolutions have allowed AI to basically become more accessible to the average person; thus ChatGPT is born. But there's a difference between recognizing what you are asking in a basic prompt, where I'm asking you to deliver this and really all it's doing is going out, performing a search, bringing back the data, and composing it in some way, and, if we think next level, how it can understand what you're asking. There's a subtle difference between asking it to return information versus it understanding what you're asking, or building certain interpretations into it. That piece is not there in AI today, and until we can get to that as the next level, we're still basically going through "I'm going to ask a question; it returns a search result."

Garik Tate (16:36): Yeah, I think you're hitting the nail on the head here. The way we humans operate is from a mixture of a bottom-up and a top-down approach. In Thinking, Fast and Slow, it's like System 1 and System 2: we can think bottom-up, which is just our experiences and recognizing patterns, and then top-down, which is having a model of reality and then simulating reality by running things through that model. The way these AIs work is purely bottom-up. It's purely pattern recognition. It is seeing your question, seeing its current answer, and then guessing the next word in the sentence based on the patterns it's seen from the web. Now, the interesting thing is it can guess patterns that don't exist on the web.
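That "guess the next word from patterns" behavior can be illustrated with a toy bigram autocomplete (the miniature corpus below is invented; real models use transformers over vastly more data). The "model" is nothing but counts of which word followed which.

```python
from collections import Counter, defaultdict

corpus = "the sky is blue . the sky is clear . the ocean is blue .".split()

# Count which word follows each word: the whole "model" is pattern counts.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word):
    """Suggest the continuation seen most often in the training data."""
    return following[word].most_common(1)[0][0]

print(autocomplete("sky"))  # the word that most often followed "sky"
print(autocomplete("is"))   # the word that most often followed "is"
```

There is no model of skies or oceans anywhere in this code; if the corpus said "the sky is green" often enough, the suggestion would change accordingly, which is exactly the misinformation concern from earlier in the conversation.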
You can ask it questions it's never been asked, and it will combine data from disparate places and guess at the next word, and it'll come up with something as functionally good as what we've experienced.

(17:35): So it's showing creativity, in effect. But if it were conscious, its consciousness would be something very alien from ours. It doesn't have a view of reality; it doesn't understand. It's prompted awake and then goes unconscious when we're not prompting it. It doesn't have a view of reality, and it doesn't dream. I think that between us and that thing you described as 10 years out or so, the obstacle we need to overcome is that top-down view. It is the ability to get the AI to have a model of reality. And actually, I'll give one example that the listeners at home can try. If you go into ChatGPT right now and type in an arithmetic equation, let's say 5 billion 300,000, et cetera, et cetera, times an equally ridiculous number, you would expect it to be able to give you the correct answer, because, well, it's a machine, and machines are like, hey, a calculator's not that hard. But it will give you an answer that is incorrect, one that looks good enough, and if you check it against Google or a calculator, you'll see the difference.

(18:49): The first few digits are probably correct, the last few digits are probably correct, but it's a completely different answer. The reason it does that is it doesn't have a model of reality. It doesn't have a picture of, "now let me use a calculator and run this algorithm." It's just guessing the next number in the sequence based on your prompt, and there haven't been enough examples of people asking that specific question for it to generate the answer. The weird thing is it gets the first few digits correct, though, which means it's seen enough similar questions to guess the first few digits.
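You can check the arithmetic claim yourself. Python's integers are arbitrary precision, so the product below is exact, unlike a next-token guess (the second factor here is made up, standing in for Garik's "equally ridiculous number").

```python
a = 5_000_000_300_000      # "5 billion, 300,000, et cetera"
b = 7_654_321_987_654      # an equally ridiculous number, invented here

exact = a * b  # Python integers are arbitrary precision: this is exact
print(exact)

# The digits a pattern-matcher tends to get right are the ones that depend
# only on local structure: the last digit of a product is fully determined
# by the last digits of its factors.
assert exact % 10 == ((a % 10) * (b % 10)) % 10
```

The middle digits, by contrast, depend on long carry chains across the whole multiplication, which is precisely the kind of global structure that next-token guessing has no algorithm for.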
So this method of feeding data to it and having it recognize patterns is going further than we ever could have guessed, but it's still just a one-trick pony. That one trick is solving a whole bunch, but it's not a top-down view of reality.

Chris Hood (19:37): Oh yeah, absolutely. Another comparison for you here is autocomplete. Gmail, as an example, now has autocomplete, so you can begin typing a sentence and it will automatically fill in what it believes you are trying to say. Now, in some cases it's going to provide you with a recommendation that is better than what you were thinking in your head. All that is happening there is it's been programmed to make these recommendations from these sequences of words, and really, this is the foundation of chatbots. Every chatbot out there is not self-aware of the situation. The chatbot is simply returning information based on what it's been programmed, or modeled, to tell you. And so if it's got 10 answers, it's going to provide you one of those 10 answers. Where we start to see the disconnect is that chatbots, as an example, cannot interpret your intentions. If you have a problem, you have to spell out what the problem is, and then it goes into the database, finds the solution to that problem, or a combination of words that fits that problem, and provides the solution.

Garik Tate (20:53): Incredibly enough, though, our problems are so much alike that you can describe a problem without capturing all the nuance, and it recognizes, "hey, that problem." People who don't get enough sleep typically have this problem, this problem, this problem, and it can give you a good answer, because it's collecting civilizational wisdom, not just your problem. What you're saying is absolutely correct.

Chris Hood (21:16): It's satisfying.
I don't know what the percentage is, so I'm just going to toss one out, because I believe it's higher than 50%. So maybe it's 75%: it's satisfying 75% of the use cases, and the other 25% of use cases is what AI can't do for us today.

Garik Tate (21:34): Yeah, that's a fair way of stating it. In your opinion, do you think that right now people are using AI in the wrong ways, or asking too much of it? Or do you feel like we haven't been using it enough, and need to find more use cases and try it out more?

Chris Hood (21:55): I think there are probably a lot of people out there who don't know what they're doing, and they just bring it up: "Hey, yeah, AI, I'm going to ask it a question. Oh, that's cool." Look, you can ask ChatGPT some really interesting questions and get some really amazing results; don't knock that. There are definitely a bunch of YouTube videos out there with people going through how to prompt-engineer and how to use this for that, and most of those individuals are generating content for clicks. People are watching that content and trying to analyze it. As I said, I'm getting a call or an email a day, or more, from someone saying they're an AI expert: "Hey, I've got the only AI service that does fill-in-the-blank." There are 1,000 other AI sales tools out there.

Garik Tate (22:46): It's what we were saying about AI becoming widely accessible. You can add AI to just about any problem, so all of a sudden all these people can just add it to what they're doing.

Chris Hood (22:55): And all you have to do is tack on "powered by AI." I think people are waking up and starting to realize that it is not everything they think it is. On some level, I believe people are using AI wrong because they don't really understand what it is.
There are definitely a lot of people who are investing boatloads of money for no particular reason, hoping that it's going to solve some magic thing for them, and I just don't think they fully understand it enough to use it, implement it, or make a value proposition for why they're doing it.

Garik Tate (23:33): Yeah, AI is very good at solving a particular set of problems, and I think the people who are going to benefit most are those who focus on those problems first and foremost. He who has a hammer sees the whole world as his nail. So if people are saying, "oh, I have this shiny new tool, how can I add it everywhere?" and they don't have a clear problem in mind, then they're probably at least learning a new skill that will benefit their organization in the future. So I don't want to say it's a net negative, but certainly we're throwing too much money at it in the wrong areas. I think the two areas where AI helps us the most are: one, it helps individuals who need a buddy, a brainstorming partner, someone to support them in getting over blank-canvas syndrome.

(24:25): And the second area is anywhere that has clear inputs and outputs. If you can give it all the context and a clear input, you know exactly the type of output you want, and you have a lot of data in that area, you can do that. So I encourage business owners: look at your business process and find the bottlenecks, find the areas where you have too many errors, where high throughput is producing errors and costing you capacity. Because if you add AI there, that's going to free up resources and free up your capacity to focus on the more human problems, the more cutting-edge stuff. That's where AI is useful.

Chris Hood (25:04): Yeah, I think on that input and output, the area I really see as beneficial to businesses is definitely automation.
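Garik's "find the high-throughput, high-error stages" advice can be sketched as a tiny ranking exercise. The stage names and metrics below are invented for illustration; the point is only the arithmetic of prioritizing by expected errors.

```python
# Hypothetical stage metrics for a fulfillment pipeline: items handled
# per week and the observed error rate at each handoff.
stages = [
    {"name": "client acquisition", "throughput": 200, "error_rate": 0.02},
    {"name": "onboarding",         "throughput": 150, "error_rate": 0.10},
    {"name": "fulfillment",        "throughput": 140, "error_rate": 0.15},
    {"name": "invoicing",          "throughput": 140, "error_rate": 0.01},
]

# Expected errors per week = throughput x error rate: high throughput AND
# a high error rate is where adding AI (or any QA fix) pays off fastest.
for s in stages:
    s["errors_per_week"] = s["throughput"] * s["error_rate"]

ranked = sorted(stages, key=lambda s: s["errors_per_week"], reverse=True)
for s in ranked:
    print(f'{s["name"]:20s} {s["errors_per_week"]:.1f} errors/week')
```

High throughput helps twice, as Garik notes: it makes the fix worth more, and it gives you many repetitions to test the AI against until it works the way you want.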
Any task that you can replicate quickly and efficiently to get elements out to market faster through that process is great. And from a purely data perspective, being able to sift through massive amounts of data quickly to analyze it, understand it, and provide insights back to either your internal teams or consumers is definitely where you need to be investing that resource. And there are a lot of examples of that that have really nothing to do with generative AI. Unfortunately, there are thousands and thousands of startup businesses and entrepreneurs who have started a business off of ChatGPT in the last six months, and I would say that by this time next year, the majority of them are going to be out of business, because there's really no foundation to their business.

Garik Tate (26:09): The AI companies are in some ways providing a utility that other people can plug into and then funnel into their own products. But if that's the only thing you're doing, then you don't have any competitive advantage; you don't have any value that is unique to your own business.

Chris Hood (26:28): So based on what you were just talking about, if we were to summarize this for our listeners, executives out there, business owners: what are some of the things they should be focused on in order to be more successful as a business?

Garik Tate (26:45): I advise our clients to break down their business. Actually, I will first qualify this by saying that most of my clients are small to medium businesses, often founder-operated, and their needs very often rely more on these generative models. If you are leading a larger organization and can really invest in cleaning up your data and formatting it, then these analytical AIs are immensely powerful, and really anybody who can create a spreadsheet can use them.
But I often find that for smaller business owners who maybe do not have years' worth of data, who do not have pre-formatted data, it's a really big task to gather all that data into a single place and put it in a form that good insights can be mined from. A more short-term win that a lot of people can get is by making a diagram of their business process, starting all the way from client acquisition, to client onboarding, to fulfillment, all the way through.

(27:52): And better yet, put metrics to those things. Say where customers are falling off, how many hours we are spending at each stage of fulfillment. These are Six Sigma-style exercises. And when you have that in front of you, find the handoff places where things go wrong, find the places that connect different departments together where mistakes are being made, and then ask yourself: where here could we have a clear input and a clear output? Because those are the areas where you can add AI. Better yet, find the places that have a lot of throughput and also a lot of errors, because by adding AI there, you're going to be able to increase the QA, increase the quality of the transfer, and make it more standardized. Plus, with high throughput in that place, you're going to be able to test the AI over and over again until you get it working exactly the way you want. And then you free up those resources. So that's the typical process we walk our clients through, and it's something really anybody can do if they approach it from a systematic point of view.

Chris Hood (29:02): I love the idea of having AI help you with process improvements. It can often offer insights or ideas that you may not have thought of, and it would typically be an unbiased opinion. How can people get in touch with you?

Garik Tate (29:17): Certainly less biased. And thanks for having me; this was a lot of fun. For the audience:
I would say, if you're looking for a partner to increase your business valuation, or you're looking to get acquired, just reach out to me on LinkedIn, say that Chris sent you, and I'd be happy to jump on a call and discuss. On top of that, if you're looking to start a new venture with AI, and you think you have an unfair advantage, that you're not just plugging AI into something with no moat: we're looking to start new companies with a plan to exit from them in two to three years, so hit us up. And lastly, if you are building a product or a new application, you can check out my company's website, Valhalla.team.

Chris Hood (29:58): Appreciate it so much. Thank you again for the insightful and wonderful conversation.

Garik Tate (30:04): Thank you, Chris.

Chris Hood (30:06): And of course, thanks to all of you who are listening. If you like what you heard, please subscribe to the show on your favorite podcast platform and leave a review. Your feedback helps us improve and grow. And if you have any questions, comments, or ideas for the show, you can connect with us through social media and online at the Chris Hood Show. And please share this episode with your friends, family, colleagues, or anyone else looking to grow their business and start their own digital evolution. Until next week, take care and stay connected.