This episode is sponsored by the CIO Scoreboard
Kevin Kelly, I think, may be the smartest person in the world…and I am only half-joking. I have been deeply interested in his work, and his thinking has influenced mine.
His 2010 book What Technology Wants changed my perspective on Information Technology; his book Cool Tools is a compendium of the best tools cultivated from his years of research. Among the other resources I like are his blog post 1000 True Fans; his latest book, The Inevitable, just released this summer; and his podcast interviews on London Real, Tim Ferriss, Lewis Howes, and Chase Jarvis.
I asked him to come onto the show to get into topics that I had not heard him dive into from the perspective that I was curious about… I know you will be too.
Major takeaways from this episode are:
1. If you were the leader of a 1,000-person company, what would you ask your 5 direct reports to do?
2. What skills are needed to teach kids to handle this new future in regards to learning and failure?
3. How Kevin Kelly would handle ethics and governance as we program Artificial Intelligence.
4. How humans will become more ethical and moral by training AI.
5. Kevin’s AI philosophy is unique and will help you understand the role of AI working with other AIs.
6. His opinion on the difference between AI, Machine Learning, and Deep Learning.
7. The importance of being a newbie and an attitude of being a lifelong learner.
8. The difference between learning how to learn and finding the way of learning that is unique to you.
9. The skills enterprise leaders need to have in regards to how to fail.
10. The important skill of looking at the edges.
11. “In a world of abundance the only scarcity will be our attention,” Herbert Simon.
I have linked up all the show notes at redzonetech.net/podcast, where you can get access to Kevin Kelly’s books and publications.
About Kevin Kelly:
Kevin Kelly is Senior Maverick at Wired magazine. He co-founded Wired in 1993, and served as its Executive Editor for its first seven years. He is also founding editor and co-publisher of the popular Cool Tools website, which has been reviewing tools daily since 2003. From 1984 to 1990 Kelly was publisher and editor of the Whole Earth Review, a journal of unorthodox technical news. He co-founded the ongoing Hackers’ Conference, and was involved with the launch of the WELL, a pioneering online service started in 1985. His books include the best-selling New Rules for the New Economy; the classic book on decentralized emergent systems, Out of Control; a graphic novel about robots and angels, The Silver Cord; an oversize catalog of the best of Cool Tools; and his summary theory of technology, What Technology Wants (2010). His new book for Viking/Penguin is called The Inevitable.
AI is what we can’t do yet. As soon as we can do it, we call it machine learning or expert systems. It’s a ridiculous thing: AI is all the things we want to do but can’t, and machine learning and expert systems are all the kinds of things that we can do. As soon as we do it, it’s no longer AI, it’s something else. AI is always going to be receding in front of us. Anybody 50 years ago would have called Siri or Alexa AI, but now it’s just something else.
Well, Kevin, I want to welcome you to the show today.
Kevin: Well, it’s a real delight to be here. Thanks for having me.
Bill: Well, I wanted to spend a couple of minutes just exploring a couple of the concepts that you covered in your new book. I have been an avid reader of not only your blog, but also your books Cool Tools and What Technology Wants. I’ve been tracking this for a couple of years now, just kind of digesting your thinking. What Technology Wants was interesting because it teed up the book you just released, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.
[00:01:00] That’s the title, which we’re certainly going to talk about, but one of the pages from your book teed up this conversation: it said that scientists and inventors are repulsed by the idea that progress in technology is inevitable. Serendipitously, I pulled the book out at this point. You mentioned that people think it’s a cop-out, or a surrender to non-human, invisible forces. What are your thoughts on how people react to the word “inevitable”?
[00:02:00] Kevin: Some people get upset and other people are worried about it, in the sense that they can accept that there are certain things that are inevitable, but they’re not happy with what seems to be coming; they would prefer otherwise. There are those who just object to it and kind of refuse to believe that anything is inevitable. So there are people who don’t like it but accept it, and there are others who won’t even accept it. Then there are others who, hopefully, I can convince that, yes, you have to accept it, but we have far more degrees of freedom within it.
[00:03:00] We have plenty of choice, and in fact a lot of what we care about is not the inevitable part, but what version of things we get, and we have enough choice there to keep us busy and to focus our efforts. The telephone was inevitable, but what kind of telephones we get is up to us, and the same thing with tracking or AI. Those are inevitable, but we actually have a lot of choice in the ways in which those forms come to us: whether they’re open or closed, commercial or non-commercial, international or parochial. We have a tremendous amount of choice even though the larger forms are inevitable, and that choice shapes what we think about them and gives us plenty of opportunity to exert our will.
[00:04:00] Bill: What I find most interesting about the word “inevitable” is reading through the chapters. I feel like the human brain almost has to be upgraded as you read it, almost to be able to think about the incomprehensible, or to think about the future the way you do this continuously. For a leader to look at this, for someone to really engage with this thinking, do you feel that there has to be a different, almost radical way of using your own brain?
Kevin: Yeah, I’ve never thought about it that way, but it’s possible. A great example is demographic trends. They’re inevitable on a certain timescale, meaning there’s not much we can do to change the next 20 years in terms of the number of people on the planet. You have to accept that at a certain level and then work with it. Maybe that brain thing is sort of what you were getting at. It’s kind of like what a lot of leadership is about: you accept the things that you can’t change and then you focus on the things that you can. It’s like that old St. Francis prayer, I think.
Bill: Sure, yeah, for sure.
Kevin: Looking for the wisdom to be able to tell the difference between the two. Leaders have to accept certain things. You try your hardest, but at a certain point you realize there are certain things you have to be realistic about; you can’t just deny them. Once you accept things, then you can hunt for or find the levers that you do have control over. That’s sort of what I’m talking about in technology: it’s the same way, but there may be more of technology that is inevitable than we first thought.
[00:06:00] Bill: Maybe one of the pieces is the way we think about some of these inevitables, picking on AI for example: we’re looking at it as a natural threat, because of our ingrained wiring. What I found really interesting listening to you at the RSA Conference is that you talked about the ethics, that possibly this is a great way to really get our own ethics and our own governance in order as we programmatically set up AI. Can you comment a little bit on your thoughts about that?
[00:07:00] Kevin: Yeah, my basic default premise is that humanity is something we have invented over time, that we’re self-created, that it’s ongoing, and that we’re still creating. Even our morality and our laws and ethics are basically something we’ve been working on, creating over time. We’re still at a very primitive stage of that. We think of ourselves as highly evolved ethically, but actually I think we’re fairly shallow, and we don’t have a very deep or consistent articulation of it.
We can teach, and need to teach, our AIs morality and ethics, because they are making decisions, and sometimes these are life-and-death decisions: if you’re driving a car, or making a decision in medicine, or with drones or soldiers. These are important decisions. We have to give them the kind of ethical and moral bearings that we have. We could do that if we were very clear about what ours were. It turns out that when we sit down to try and convey this in a program, we realize that our own ethics and morality are very, very shallow and inconsistent and not very well developed.
[00:08:00] As we try to teach, it’s like teaching our children: we realize we’re not very good, and that process of teaching is actually making us better, because it’s forcing us to go another level deeper. It’s forcing us to be even clearer, to go wider and become more consistent. Over time, as we try to develop the ethical systems of the AIs and robots, we ourselves will become better. We will be able to teach ourselves better. We’ll be more aware of it, more articulate, actually better ethically, as we try to teach these young children how to do it.
[00:09:00] Bill: Yeah. I had an interesting conversation, spurred again by that conference, at the dinner table with my own kids. They’re not technically inclined at the moment, and they were saying, “Dad, we’re never going to get involved in artificial intelligence.” I explained driverless cars: if a driverless car sees a human being in the road in front of it and it’s programmed to swerve away from a human being, but on the sidewalk there are 3 human beings, who’s going to set that up and make that decision? I wanted to make it clear that the programming ethics and the philosophy around the debate are probably going to be equally as important as the technology itself.
Kevin: Absolutely, yeah. Again, we give us humans a pass. We don’t actually require us to make those decisions. We just say, it was up to the moment, we didn’t have time to think, we’re not responsible. But the AI is kind of working in slow motion, in a sense. We have to actually give them a decision, because they will think through all of that, and so we won’t give them a pass. We can’t give them a pass, and therefore, in some senses, they’re going to be more ethical than us, because, again, we give ourselves a good excuse not to have to think about these things.
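The dilemma Bill raises (one pedestrian ahead, three on the sidewalk) ultimately has to become a rule someone writes down. Purely as a hedged illustration, not anyone's actual vehicle planner, a toy sketch of such an explicit policy might look like this; the maneuver names and harm counts are hypothetical:

```python
# Toy sketch of an explicitly programmed swerve policy.
# All option names and harm estimates are hypothetical illustrations;
# real autonomous-vehicle planners are vastly more complex.

def choose_maneuver(options):
    """Pick the maneuver with the lowest estimated harm.

    options: dict mapping maneuver name -> estimated number of
    people put at risk. Whoever writes this objective function
    is making the ethical decision.
    """
    return min(options, key=options.get)

scenario = {
    "brake_straight": 1,      # one pedestrian in the road ahead
    "swerve_to_sidewalk": 3,  # three pedestrians on the sidewalk
}
print(choose_maneuver(scenario))  # -> brake_straight
```

The point of the sketch is that the ethics live in the objective function: whoever chooses what counts as harm, and how it is weighed, is making exactly the moral decision Kevin says we currently excuse ourselves from.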
Bill: I would love to ask you a couple of questions about rising complexity. This really builds on your previous book. You had mentioned in 2010, and I forget the exact statistics, that the world of connected devices, all the routing and switching and servers and the internet of things, was roughly the order of magnitude of one human brain in complexity. I wanted to ask you, where do you feel the complexity of the connected world is right now in relation to that?
[00:11:00] Kevin: Yeah, well, I did revisit those numbers. It’s many orders of magnitude greater than a human brain; it’s multiple brains. I think it was 2029 or 2030 or something when the complexity of all the machines connected would equal the complexity of all the humans connected: all the human neurons, where you take 7 or 10 billion humans, add up all those neurons, and you have this sort of global brain of all the human neurons.
Kevin: We’d also have a similar number of transistors connected as neurons. Something is going to happen when that happens. I don’t even know what.
Bill: Well the thing is-
Kevin: That’s, you know, you cannot connect a gazillion transistors together and not have something happen.
[00:12:00] Bill: I found your chapter on filtering interesting, because as the complexity rises, and this is impactful for business decision makers and IT security, it’s really beyond human beings to keep up with what we’re doing. We’re going to need some help, either from AIs that we program or, I guess, from the machine mind. At some point we’re going to need some help. You even quoted Herbert Simon, the social scientist who won the Nobel Prize: in a world of abundance the only scarcity is our attention. I’m just curious, how do we manage attention amid that much complexity?
[00:13:00] Kevin: Yeah. Well, this is slightly scary, but the answer is that there will be lots of stuff happening that will not have our attention and/or understanding. The thing we still don’t really appreciate is the extent to which a lot of AI will be happening and we will simply be inherently unable to understand what it’s doing and its decisions. There is one hope of a technological solution to this, which is that in a certain sense there might be types, or I should say subsets, of AI whose role is to reveal the thinking processes of another AI. In a certain sense, we could think of our own self-consciousness as that.
[00:14:00] We make decisions in ways that can be opaque to us, but we have this other system that we try to use to access how we think. It’s actually kind of a separate process. We might put more of this self-consciousness into AIs in order to help us figure out how they made a decision. The problem with an AI is that it could make a decision that was, say, prejudiced, and we wouldn’t even know it. It might not even know it. We would have difficulty even determining that. We probably will start to layer in these other forms of consciousness in order to help us access the thought process, because right now the way it works is that we don’t have any access to it.
It’s not aware of it, and so it’s becoming more and more complex, and we don’t know how it arrives at decisions. There’s a famous mathematical proof that has something like a million steps in it. There’s simply no human who can hold that in their brain, so we have to accept the computer proof as either valid or not. We can verify each step, but we can’t take it in as a whole.
[00:15:00] There might be AIs that we develop whose whole job is simply to help us ascertain or probe other AIs’ decisions; you could think of that as a kind of consciousness or self-consciousness. I think this is a huge area that we’re going into, which is that we will have difficulty accessing these things. If we get to the planetary scale of all this, there will be difficulties just from the scale at which these things are happening. We might be able to develop tools that would allow us to get some better understanding of what’s happening, because, I think, without them we have no hope.
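Kevin's idea of an AI whose job is to probe another AI's decisions is close to what the field now calls explainability. As a minimal, hypothetical sketch of one common approach (far simpler than real explainers such as LIME or SHAP), one system can nudge the inputs of a black-box scorer and watch which features move the output; the feature names and weights below are invented for illustration:

```python
# Minimal sketch of probing an opaque model by perturbation.
# The "black box" here is a stand-in we pretend we cannot read;
# the probe inspects its decisions purely from the outside.

def black_box(features):
    # Opaque decision function (hypothetical weights).
    return (0.6 * features["income"]
            + 0.4 * features["age"]
            - 0.9 * features["zip_risk"])

def probe(model, features, delta=1.0):
    """Estimate each feature's influence by nudging it and re-scoring."""
    base = model(features)
    influence = {}
    for name in features:
        nudged = dict(features, **{name: features[name] + delta})
        influence[name] = model(nudged) - base
    return influence

scores = probe(black_box, {"income": 5.0, "age": 3.0, "zip_risk": 2.0})
print(scores)
```

Here the probe would reveal that `zip_risk` has the largest (negative) influence. A probe like this could surface a proxy for prejudice that neither the model nor its owner "knew" about, which is exactly the failure mode Kevin describes.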
[00:16:00] Bill: You use the example of Amazon, and many others, within that chapter. Amazon is able to select based on preferences or based on your previous selections. But maybe there’s an AI that gets programmed like a personal assistant of some sort, that you could offload higher orders of thinking, or more complex thinking, to. Is that sort of what you’re thinking?
Kevin: Yeah. We’re running these very large systems on a kind of planetary scale right now. I think one of the things we’re going to import into, say, the internet and trunk lines and services is this idea of immunology: that you have an immune system, and the way you deal with malware or hackers or break-ins and other kinds of stuff is with an immune system that isn’t working at any local level. It’s something that’s being run in the system itself rather than on a local server.
[00:17:00] There’s a kind of global cooperation necessary to run these things: something that can ascertain whether there’s a foreign body, have a memory about foreign bodies, and work to eradicate them or keep them to a minimum. The lesson of immunology is that nothing ever gets to zero: you have a presence that you tolerate at a certain level, and you keep it there. So there’s a kind of biological model for dealing with these things, and intelligence is another piece of it too: it’s learning, it’s adapting. We may not understand it, but we might have these modules, another layer, that would try to self-monitor, and that’s sort of the equivalent of the self-consciousness.
[00:18:00] I think a lot of this is happening not so much at the consumer level, but at a higher infrastructural level. There will be consumer-facing AIs, like your Amazon Echo’s Alexa, where you’re having a conversation with it; maybe they could represent you, and you can have an agent that’s working only for you. But a lot of this is happening at the systems level.
Bill: Yeah, it’s funny you brought up this point about immunology, the human-body equivalent: when we have an infection, the human body just automatically deploys to the site of infection and helps it heal. I watched you give a YouTube presentation on this, but I couldn’t find it again.
I had listened to it about a year ago and couldn’t find it subsequently, but it stuck in my brain, because I brought someone on the podcast about 9 months ago who talked about STIX and TAXII, which are really information-sharing protocols with the federal government. Our teams get calls from the FBI letting us know that XYZX customer currently has an exotic attack going on, which they don’t even know about because it’s a 0-day.
[00:19:00] We sit here going, why didn’t the FBI just share that information earlier? There are methods of trying to get the federal government to share with commercial, because commercial already has lightweight sharing going on between some of the Silicon Valley companies, like Palo Alto and some others, but it’s lightweight sharing.
They’re not sharing all signatures, but I can imagine, and even one of your chapters is on sharing, that this ultimately will be shared, so that federal government data and commercial data from the top heavyweights like Palo Alto and Cisco are all enmeshed and being ingested back into the firewalls and antivirus signatures. So there’s a symbiotic deployment, and when there’s a DDoS attack somewhere, we can automatically deploy signatures to mitigate it. It’s a very interesting thing.
[00:20:00] Kevin: Absolutely, yeah. To me that’s inevitable, and I would say that any sophisticated global civilization would have one of those things operating. There are political challenges that stand in the way, but technically I think enough is known about how to do this today that it could be accomplished, with some additional innovations and management tools and safeguards and things like that.
[00:21:00] I think it has to be a more algorithmic, self-learning system that manages these things, and it does require global cooperation, because it has to work everywhere or it doesn’t really work. So the biological models, the learning models, and then there will be a layer of AI in this which will be essential. Maybe that’s the main tool we’re waiting for: an intelligence that can recognize a new signature for what it is, or in some way has a protocol for what needs to be done, so that you’re not having humans involved. Humans are awake or not awake, waiting to act; you want something which is always on.
Bill: Yeah, right, in the background.
Kevin: In the background.
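The immune-system model Kevin and Bill are circling (detect a foreign body, remember its signature, and share that memory so every node can respond with no human waiting to act) can be caricatured in a few lines. Everything here is a hypothetical illustration: the class names are invented, and real threat-intel exchange such as STIX/TAXII involves rich structured indicators, trust relationships, and transport layers rather than bare hashes:

```python
# Toy "immune memory" for shared threat signatures (hypothetical
# illustration only; not how STIX/TAXII or any real product works).
import hashlib

class ImmuneMemory:
    def __init__(self):
        self.known_bad = set()  # remembered "foreign body" signatures

    @staticmethod
    def signature(payload: bytes) -> str:
        # Hash stands in for a real structured threat indicator.
        return hashlib.sha256(payload).hexdigest()

    def observe(self, payload: bytes) -> bool:
        """Return True if the payload matches a remembered threat."""
        return self.signature(payload) in self.known_bad

    def learn(self, payload: bytes):
        self.known_bad.add(self.signature(payload))

    def share_with(self, other: "ImmuneMemory"):
        # The "global cooperation" step: one node's memory
        # inoculates another, with no human in the loop.
        other.known_bad |= self.known_bad

fbi, company = ImmuneMemory(), ImmuneMemory()
fbi.learn(b"exotic 0-day payload")
fbi.share_with(company)
print(company.observe(b"exotic 0-day payload"))  # -> True
```

The biological point survives even in the caricature: the memory never drives threats to zero, it just lets every inoculated node recognize and respond to what one node has already seen.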
Bill: You distinguished between AI and machine learning. Is machine learning basically a more commercially ready version? How would you explain it?
[00:22:00] Kevin: AI is what we can’t do yet. As soon as we can do it, we call it machine learning or expert systems. It’s a ridiculous thing: AI is all the things we want to do but can’t, and machine learning and expert systems are all the kinds of things that we can do. As soon as we do it, it’s no longer AI, it’s something else. AI is always going to be receding in front of us. Anybody 50 years ago would have called Siri or Alexa AI, but now it’s just something else.
Bill: That’s one of the big parts of your book: the differentiation between the now and the future. I think one of the pieces that makes people very uncomfortable is that they’re constant newbies, because of the speed at which change is happening with the technology. Or maybe it’s better for you to explain what you meant by our constantly being newbies, and by the now?
[00:23:00] Kevin: Yeah. It’s no surprise to anybody that we have this accelerated change, which I think will continue, despite the fact that, if we look at it quantitatively, most of the technology in our lives is old stuff. It’s stuff that was invented before you were born: wiring, switches, electrical and plumbing systems, roads, concrete. That’s most of the stuff. We tend to view technology as anything that was invented after we were born, but it’s actually everything, invented before we were born and after.
[00:24:00] The stuff that was invented after we were born, the new stuff, often doesn’t work very well either; it takes a while for it to figure out what it does, what its best role is, and how it works. We have a real focus on this current stuff, but most of what surrounds us is old. However, the new stuff preoccupies us, and it will continue to keep changing very, very fast, and in order to use it, we have to keep learning it. We think we’ve mastered the phone, but the phone is going to be replaced by something else, VR or whatever is coming.
We have to learn another set of gestures, another set of interface cues, another set of programming languages, whatever it is. There’s basically no rest from this river of new things coming. The proper stance, really, is to understand that we will forever be newbies: the clueless newbie who has no idea what things mean, who does not understand the slang or the jargon, who is a little embarrassed because there are people around who seem to be very fluent in it. But that’s just going to be the state for everybody.
[00:25:00] You can’t be an expert in everything, and the new things coming along will have no experts, so we’re all going to be kind of slack-jawed, clueless people, and that’s okay; that’s just the standard default position. If we accept that, then we can go through the process of learning yet another thing. We shouldn’t think, well, I can coast now, because I know how to use a laptop, I know how to use a phone, I’m set. No, you’re not set, because in 5 years there’s going to be a whole other regime and you’re going to have to learn all this new stuff again.
Bill: We always have to be in a continual state of learning, almost perpetual learning in many respects. There’s no-
[00:26:00] Kevin: Right, yeah. They talk about lifelong learning, and that’s exactly what we’re talking about; in many ways your learning begins when you graduate. But I do think there are some general skills, what I call techno-literacies. They are general skills about how technology in general works that are very useful and can persist over time. Basic notions like: you will personally be a newbie, and every new thing you take into your life has not just a monetary cost, but a maintenance cost as well.
You have to start being more selective; you can’t use everything. There are too many choices, so you have to start curating, so to speak, the technologies. Since we can’t use all of them, we have to actually select certain ones. There are going to be meta-skills in terms of selecting which technologies we want to keep and which ones we want to maintain, because they all need maintenance, et cetera. I think there are kind of uber-technological skills that are helpful to have, but they’re not specific to a particular kind of technology.
Well, I think you sort of hit on that a little bit, because there’s too much information. You’ve got to have that skill of selecting.
Bill: Now, if you owned a company, let’s say it had 1,000 employees, and you had a team of 5 business leaders, what would you say to them as far as how you would want them learning and understanding these shifts that are going on, so that the company remained competitive? Maybe folks on the business side, the business IT leadership and the business marketing leadership: how would you want them to think about the future?
[00:28:00] Kevin: It’s a really good question, and of course very, very practical. I like what Marc Andreessen has often said about what they like to pay attention to: what are all these thousands of people doing on their days off, in their free time? I like to look at where technology is misused, abused, and unsupervised: basically how the criminals, the kids, the children in the street use it, because there you get a little bit of a sense of where it wants to go, what it wants in a certain sense. It’s not being dictated by what people should be doing, or what they believe they’re supposed to be doing; it’s being released a little bit to try and go its natural way.
[00:29:00] In that sense, also, I keep an eye on where people are using it for free, before monetization, which will give you a little bit better idea of where things are going. I recommend that they pay attention and try to do what I call listening to the technology, to see where it wants to go, what these ingrained tendencies are. For instance, for 30 years the music and Hollywood industries have refused to pay attention to the fact that things inherently want to be copied; you can’t stop copies from flowing. There’s been this misplaced emphasis from them on copy protection, copy loss, and blah, blah, blah.
[00:30:00] They have lost 3 decades of their own advancement by refusing to understand or acknowledge or accept the fact that they can’t stop copies from flowing around. This was evident 30 years ago, very clearly, to most people online: this technology wants to copy stuff, so you can’t stop the flow of copies. I think there are similar things happening now with tracking and AI and the stuff we were talking about, which some people will just not see, or having seen, refuse to accept.
They’ll try to prohibit or block or stop or turn it down or turn away or whatever. I’m preaching something different: look at the edges, because that’s where the center is going to go. Listen to what is happening in these places where it’s unsupervised, or where it’s happening before monetization, because that’s going to give you a much better idea of where the main thing is going to happen. Look at the margins, because that’s where the center is going to go.
Bill: Edges and margins. You’d basically tell them to look there, where it’s unsupervised, before monetization.
Kevin: Right, right. If I were in IT, I’d be asking: what are people doing with all this kind of stuff on weekends? What are the bad guys doing with this? Where do kids give their attention? You know, Pokemon.
Bill: Sure, absolutely.
Kevin: Pokemon is like the future, and not the game itself. It’s not the game; that’s not going to last very long. But the thing that’s happening there, that convergence, this mixed reality, that’s big.
[00:32:00] Bill: Yeah, I know. I think that’s perfect, and it relates to one of the questions I asked earlier, about the neuroscience and the brain and the thinking. It’s one of the best questions to ask, so that’s a great answer; I appreciate that. I think the audience is going to love that piece. I was curious also from a kid’s point of view, because I have 3 kids like you do. Mine are a bit younger, but what kind of skills do you think parents should really be encouraging their kids to have as they grow up, to be in the best position to take advantage of, and see, these edges for themselves?
[00:33:00] Kevin: Yeah, well, I think in the future, if you want an answer, you’ll ask a machine. The answers are going to come from machines, so one skill is that you want to train kids to ask really great questions. There are all different kinds of great questions: exploration is a type of question, innovation is a type of question. What if? What if this? What happens if that? What if I do that? These are all kinds of questions. I think a good question is much better than knowing an answer. The other kinds of skills are… by the way, questioning authority, which is the subtext of every American Hollywood movie, you know, the rebel who questions authority.
Kevin: That’s still very, very valuable, questioning assumptions, but you have to do it in a certain way, because most assumptions actually are true. The things everybody knows, most of those are true, but there are some that aren’t.
[00:34:00] There’s a kind of graceful questioning of authority, questioning what everybody knows, but you have to be realistic about it, because a lot of what everybody knows is true. You know, London is the capital of England. But I do think this idea of being able to challenge in a graceful way and question things is a huge thing for innovation, for discovery, for creation.
Technologically, I think it’s being a newbie, lifelong learning, and then the other essential element, one that some other cultures have a little bit more difficulty with, which is this idea of failing forward: using failure as a mode of learning, trying to keep failures fast and small rather than big and traumatic. It’s what I would call failure management, learning how to fail usefully and productively.
Bill: It’s not that they’d rather fail, but as you do it, you can do it fast and quick, as an experiment versus a stigma.
[00:36:00] Kevin: Exactly, right. It’s a tool: failure as a tool, as a means of learning, as a method of progress. Particularly for overachieving kids, that’s a big thing, to master failure, because they often haven’t experienced failure in that way, and sometimes it will hurt. It’s like sports: you’re going to trip and fall and it’s going to hurt, but that’s just part of it. There are other kinds of skills besides learning how to learn, which to me is the uber-skill. And I’ll say something more about that: it’s not just learning how to learn. Here’s the real trick, which I actually learned from Tim Ferriss: it’s learning how you learn.
[00:37:00] We all learn differently, and what you actually want to be able to do is understand how you learn best, so that while you’re learning all your life, you’re doing it in the way you learn best, and that varies tremendously between people. Coming to that is pretty challenging, but that’s the thing you want to know, because we all learn differently. Of course, we learn different things differently as well, so it’s a very complex answer. If you can get there, and if you can get there early, let me put it that way, you’re way ahead.
Bill: That’s a great message; I love that. I appreciate you spending some time answering that, because I think a big part of this is going to be learning and being open to learning, and training kids is different from adults learning new skills. Which is interesting, because one of the pieces you mentioned, I think, was in the quantification section of your book. I wanted to pick on quantified self, because I know you’re deeply involved in the quantified self movement. I think you might have even invented the term, but I’m not sure. Did you?
[00:38:00] The term came out of a conversation that Gary Wolf and I, the founders of Quantified Self, had on a walk where we were kind of inventing the movement. I think he first suggested the term, and we both immediately recognized it was the right one and started to spread it. The credit probably goes to Gary, but we both tried to foster it by naming this little meetup "the quantified self" and posting on the internet, "If you're quantifying yourself, or you think you are, come to the first Quantified Self meetup." We just wanted to see who would show up.
Coincidentally, Tim Ferriss showed up at that very first meeting, and he was doing things that were way beyond anything we imagined or anybody was doing at that time. Since then, there have been many thousands of meetups around the world: people doing show and tell about how they self-track, and developing the new tools, technologies, protocols, and philosophy around it.
[00:39:00] I don’t attend as frequently as I used to. Gary is kind of running the organization, but I do try to keep up with what’s happening. I can say pretty confidently that anything that you can possibly imagine that’s quantifiable, somebody in the world today is tracking it.
Bill: Yeah, that was a fascinating chapter, Kevin. I have a short story to tell you about it. I actually had a concern about my health, and I went to a doctor, and she really didn't know what to do with the blood tests and urine specimens and all of that. I went to another doctor, who ordered the tests and sent out the samples. He said, "I don't know why you're here. You're young, you're in your 40s, but we'll send them out anyway." When the results came back, there were a bunch of problems he didn't know how to solve. I had to find yet another doctor to figure out how to solve them, which he did.
[00:40:00] What's interesting is that later I was training for an Ironman, and I had my heart rate on my watch and my power output on the bike talking to me. Halfway through the Ironman, with all this data coming at me, I said, "You've got to just compete. You've got to put away the data and compete." I found the data incredibly helpful, but there came a point when I had to get in there and just get it done; I had to rely on my feeling. So my question is: where does feeling come into play versus the data, and how do you arrive at the results?
[00:41:00] Yeah, yeah. First of all, our brains are not at all built to process data or statistics or anything like that. Our brains are just really lousy at it; that's why we have artificial intelligences and things like calculators and statistics programs. The real challenge in quantified self is actually finding meaning in all this data. It's not that hard to generate the data, collect it, and store it, but extracting meaning from it, particularly in real time as you were trying to do in a race, is very, very difficult and challenging.
However, there are some interesting things about it. One direction you want to go with this: we want to take the data and turn it into a new sense, into a sensory thing. You want to be able, as you say, to feel it rather than have to see all the data. You want to know it in your body. I talk in the book about this really cool hack a guy did with a digital compass. He turned it into a belt so you could feel north rather than see it.
Bill: That was a great story.
[00:42:00] It became a new sense. He invented a new sense: he was sensitive to northness, to the compass direction, and he had an intuitive sense of where north was. He could just point to it, but it actually transferred into an almost better intuitive map of where he was geographically, because he always knew exactly where north was. He gained a new sense of spatial navigation.
There are experiments with this. I know a guy who's making a vest that turns sight into feeling for blind people, so they can actually feel where they are: a camera looks out, and they feel it in their chest. This is the ability to take a large amount of data and turn it into a new sense, so that, say, you could actually feel the glucose in your blood.
You'd be able to feel it rather than just look at the data. I think the data is the first step.
The second step is to make new senses out of it, and that's where we're going with this. The third would be to do it in real time, and then to learn how to absorb a new sense and master it. So I think you're right that we're just in the early stages; we're dealing with numbers, which we're really terrible at dealing with.
Bill: No, I love your answer. It's fantastic, and it was a really fun chapter to read. I know we've got to wrap up here quickly, and I wanted to ask one last question related to people playing offense in their lives and businesses with the technologies you talk about. You cover augmented reality, virtual reality, sharing... a lot of different concepts throughout the book.
[00:44:00] If someone had an idea and wanted to take action on it, but they didn't have a team in place (they knew the direction, but just needed to supplement with experts), would you be inclined to look at crowdfunding? Would you be inclined to look at platforms like Elance where you can find experts? How would you counsel someone through, say, the first three steps of getting access to a team of experts they weren't necessarily going to bring on as employees?
Kevin: Well, Elance has changed their name; I'm trying to recall it.
Kevin: Upwork. I was using it a lot. They're very, very good for this kind of prototyping speed. Don't think of them as necessarily cheap; think of them as fast and temporary, so you can actually prototype.
[00:45:00] Yes, I think that's a wonderful tool, being able to outsource to freelancers, and the challenge there is specifying what you want done. If you don't know what you want done, that kind of outsourcing is not really useful. You can do experiments, but you have to be able to specify the task. If you can do that, you can outsource very fast. It kind of depends. I know a lot of creative people who host what they call prototyping workshops, where they bring together friends or people they know with different skills. They come together for a day, have pizza, and build something, just to try something out.
[00:46:00] Not knowing more specifically what you have in mind, I don't know, but there are certainly a number of different ways to do this. The thing I adhere to is what IDEO, the design company, calls the design approach to life. The basic premise is that you don't just think about things, you do things, and you learn more by doing, with very, very rapid prototyping, like within hours.
Kevin: Instead of trying to think your way through, you do, and you learn much more by doing first, before thinking. It's kind of counterintuitive, but the idea is: try something fast, and that will tell you more. Then you can think on that, do another thing, and keep prototyping from the very beginning. That's a better way to work through problems than just thinking about them.
[00:47:00] Bill: Prototyping from the beginning. It sounds like my son in the paintball game this weekend, where he just kept on shooting and shooting until he found me, and then he scored. [crosstalk 00:47:00]
Kevin: Exactly. Yeah, right. I mean, if you're going to invent a paintball machine, the idea is that you're not sitting down doing sketch after sketch. You go into the workshop and make something just to see if it can work at all. You go from prototype to prototype. It's not that you don't do any thinking, but you do it after you've built some prototypes.
That's true for whatever skill you're talking about. You can fake things, too. The whole point of prototyping is that you can often do things manually and automate them later; you do it manually to see if it even works, or if anybody wants it. Prototyping doesn't mean it always has to be a mechanical gizmo. It just means that you are doing something.
Bill: Well, Kevin, I want to thank you for your time, your wisdom, and your ceaseless creation of books. It's been fascinating. I love your new book, The Inevitable, and I highly recommend it. Is there a way people can reach out and read more of your material, a preferred website or Twitter handle where people can learn more about you?
Kevin: Yeah, I have a pretty big, extensive website at my initials, K-K dot O-R-G, kk.org. There's a book page for the book, including the translations for those who want other languages, and my other books as well, and my blogs, of which there are many, including Cool Tools, where we review one recommended tool each weekday.
Bill: Cool Tools was just fascinating, and that book was fantastic.
Kevin: Great, well, thank you. There's a Best of Cool Tools book, which is aimed at young people, to show them the incredible possibilities they have of doing stuff themselves, of making things or making things happen. Then there's The Silver Cord, the graphic novel, and we have a collection of long-term forecasts for the future. Anyway, all of that is at kk.org, and I'm a little bit active on Twitter as Kevin2Kelly, with the number 2. I know a lot of Kevin Kellys, so I'm Kevin2Kelly.
Bill: We'll link this up in the show notes. And are you active on Google Plus as well, did I hear?
[00:50:00] Yes, I am. I do post there, and actually I do a lot of reading; I follow people I find very interesting. I think Google Plus is a little bit more long-form and intellectual. It works for me for discussing and hearing about technological developments. I do post some tips, articles, and links on Google Plus. I'm Kevin Kelly, or Kevin2Kelly, there as well.
Bill: Well Kevin, I very much appreciate your time today and the message you’ve given to our listeners.
Kevin: Well it’s really been a pleasure discussing everything with you Bill and I appreciate the attention and enthusiasm for my book.
Bill: Thanks Kevin and have a great rest of the day.
Kevin: Okay, you too. Bye-bye.
How to get in touch with Kevin Kelly:
- Cool Tools Podcast
- The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future
- What Technology Wants
- Cool Tools: A Catalog of Possibilities
- Full List of Published Books by Kevin Kelly
- Blog post 1000 True Fans
- TEDx Talk 12 Inevitable Forces That Will Shape Our Future
- Interview for London Real
- Interview with Tim Ferriss
- Interview with Lewis Howes
- Interview with Chase Jarvis
This episode is sponsored by the CIO Scoreboard, a powerful tool that helps you communicate the status of your IT Security program visually in just a few minutes.
* Outro music provided by Ben’s Sound
Leave a Review
If you enjoyed this episode, then please consider leaving an iTunes review here
Click here for instructions on how to leave an iTunes review if you’re doing this for the first time.
About Bill Murphy
Bill Murphy is a world-renowned IT Security expert dedicated to your success as an IT business leader. Follow Bill on LinkedIn and Twitter.