Appendix D
Transcript of the Workshop

8:30 A.M. – 10:30 A.M.


Due to a recording issue at the start of the morning, the introductions of some attendees are not shown here.

SCHWARTZ:

…Forbes, a student of Paul Saffo, a member of the committee, and I spend most of my time in GBN, helping organizations think about the future in a great variety of ways. And what excites me about this day, frankly, is we’ve been at this now for a couple of years pulling all the pieces together and making sense of it.

MANSFIELD:

I’m Carolyn Mansfield. I’m a new addition to the Monitor 360 team so I’m excited to be here because Derek’s had me doing a crash course on everything you guys have put together so far and I’m just excited to see the ideas that crystallize out of the day.

McCORMICK:

I’m Mike McCormick. I’m with McLiera Partners, and we basically help companies use disruptive technologies in the marketplace to gain market share. I’m excited today about basically seeing different perspectives. A good friend of mine has a great definition of wisdom, which is being able to see the same situation from multiple perspectives simultaneously, and I think this is kind of an interesting opportunity to be involved in.

DREW:

I’m Steve Drew. I’m a member of the committee and a consultant to the pharmaceutical and biotech industries. What excites me is the role that biology will play in every aspect of our lives – all of the technologies, all of the directions that we go. And I’m seeing ways in which that’s coming together.

LONG:

I’m Darrell Long. I’m a professor of computer science at the University of California. That’s my day job. I also spend a lot of time working with the government – the Department of Defense and intelligence communities in particular. And I used to be a member of the TIGER committee involved here. What excites me about this is looking at technologies coming not just from my discipline but from other disciplines – physics, engineering, biology – and seeing how they can come together, and trying to understand what might happen when these things come together.

VELOSA:

Hi. I’m Al Velosa from Gartner, another market analyst trying to, and I think failing, as somebody else said, to forecast technologies, markets and all sorts of good things like that. But it’s a really fun activity to do. And what I’m really excited about actually is to look at somebody else’s – actually just looking at markets with this set of talent, because I always learn something from talking to folks like you.

TWOHEY:

Hi. My name’s Paul Twohey. I’m a recovering academic, so I’m now an entrepreneur. I used to work at Palantir and now I’ve got a startup that we’re hoping is going to disrupt some markets ourselves. And I’m kind of excited about getting a glimpse into the future with some really smart people and making sure it turns out right.

LYBRAND:

Hi. I’m Fred Lybrand. I’m on the committee. I run the U.S. operations for an advanced textiles company that’s headquartered out of Europe, and I have started a company around food safety and nutrition using IT perspectives. And similar to Peter, I’m enthused about the opportunity for synthesis in a lot of the ideas that we’ve been talking about for almost two years now.

ZYDA:

Hi. I’m Mike Zyda. I’m the founder of the Games Program at USC, the Director of the USC GamePipe Laboratory. I’m also advisor to five startups; probably the two most exciting are Emsense, which is a brain sensing, human emotion modeling company, which now has offices in San Francisco, New York, Chicago, London – we started this in 2004 and it’s growing real quick – and also Happynin Games, which we founded in September. My brother is involved in that. And I hired 15 of my own students from my own program, which is pretty fun.

How does my professional work link to this topic? I’m just kind of a disruptive kind of guy and maybe you need – [General laughter] – someone like that. So what I typically do is I go and do what makes perfect sense to me and I just go make it happen. I tried this in a military school: I was at the Naval Postgraduate School for 21 years and founded the largest cross-disciplinary degree program there at the MOVES Institute. Built a hit game inside of the school, America’s Army, with its almost four million registered players. No one told me you’re not supposed to build an operating hit game inside of a university, but what the heck, I just do what I feel compelled to do. I’ve also helped found a nonprofit in the last year called The Fight Against Obesity Foundation, and it is sponsored by Steve Harvey, the comedian, if you know him. We’re just about to buy a building in Inglewood, California, to support a group that encourages proper diet choices and fitness.

Anyway, what excites me about this meeting? A lot of interesting people; San Francisco’s fun; Gilman Louie, of course – you know, I always like to come to his meetings and listen to what he has to say – and so I think it’s lots of fun to talk about the future. I think it’s really hard to predict the future; the future just happens, and I think sometimes you have to just jump from what you’re doing and go to the next thing. So I got to do that. I quit my tenured full professor job on my 50th birthday, took a new position at USC and founded a game program. So that’s the kind of guy I am and that’s why I’m here.

GOLDHAMMER:

Thanks, Michael. Philip?

WONG:

I’m Philip Wong. I’m with Walt Disney Parks and Resorts. I’m the director of Business Planning and Development. I have a small team that basically looks at any sort of strategic issues, so these can range from issues around technology, they can also range from capital restructuring – and also forecasting and planning. So we cover a whole range of issues all across the company. The reason I’m interested – I’m going to do this the other way around. Before I actually joined Disney I was in technology for close to a decade: I started off my career at NASA, worked at Hughes Communications, Inc. for a while, and designed a satellite system for ICO Global Communications, which was a mobile satellite communication system. Didn’t fare so well. Realized the business implications in that.

SCHWARTZ:

Nor did Iridium.

WONG:

Nor did Iridium, but it was a great technology. And then I joined a couple of startups, and we actually took one startup that I worked on, an IP company, CallWave, public a number of years ago, and so I really enjoyed working in that environment, which was very disruptive in terms of the technology that we were looking at. What I thought was fascinating about that was that the disruption in the technology field came from sort of the down market and not necessarily the up market – the performance aspects of the technology. And even in what I do now, I think we’re all constantly looking and being careful about what could disrupt our company’s business. So I’m a firm believer in a Christensen sort of framework for disruptive innovation, and I’m just very excited to be participating in a forum where we can actually discuss disruptive technology.

GOLDHAMMER:

Great, thank you. Rich?

GENIK:

I’m Rich Genik from Wayne State University School of Medicine. I’m the Director of Emergent Technology Research there. We mainly deal with neuroscience and neuroimaging, looking at trying to do two things at once – which I was just attempting, reading and talking, so I didn’t do too well there – like talking on a cell phone and driving a car. Being from Detroit, we get a lot of support from the auto industry – used to have a lot of support from the auto industry.

[General laughter]

What I’m excited to be here for today is looking at approaches to predicting future disruptive technologies that are non-Delphic models, and also the difference between forecasting and predicting, and to be with a group and participate in looking at those specific items.

GOLDHAMMER:

Great.

WINARSKY:

I’m Norman Winarsky. I’m on the panel as well. At SRI I am responsible for launching ventures and licenses from SRI – disruptive technology opportunities. I’m excited because I’m going to learn from bright people.

GOLDHAMMER:

Good. Jim?

O’CONNOR:

Hi. My name is Jim O’Connor. I think the most relevant experience from my past is the fact that I was at Yahoo! Finance for seven years as the Director of Product Management, spending most of my time figuring out how to manage large sets of data and translate and display them in a very easy-to-consume fashion – not necessarily for finance professionals but for the average retail investor – as well as working on communities, trying to figure out what kind of intent there is and how to mine that data so that it would be more helpful to the retail investor. In my current position I’m a partner at a small company called Mondia down in Mountain View, where we’re a startup incubator/accelerator, helping small startups move from the idea stage to reality as quickly as possible.

I think what I’m most excited about here – when I went through the bios I realized I’m probably the least educated person in the room, which is really exciting for me ’cause I enjoy being, you know, not the dumb guy but the least educated person.

[General laughter]

Because I know I’m going to walk out of here tomorrow – or today – smarter than I was when I walked in this morning. And going through all the papers last night, I think the most interesting thing for me really is taking this really massive boil-the-ocean project and trying to figure out how it goes from where it is, kind of a concept stage, into reality – and in particular, kind of what the interfaces look like, because there’s a wide breadth of ideas in what we all went through last night. And then also, in one of the papers there was a comment that said it’s very difficult to predict the future, but the more you know about the probabilities and the possibilities and discuss them, the more ready you are to react to them if they actually become a possibility. I think that’s something that’s very exciting.

GOLDHAMMER:

Great, thank you.

DOLAN:

I’m Phil Dolan with Monitor 360. Apologies for showing up a few minutes late. I do most of my work with Herrick, Feinstein LLP and the national security establishment. What I’m most excited about is not disruptive technology per se but disruptions that cut across domains, and how technologies that are small improvements in one domain can in fact be dramatically disruptive in another, and vice versa.

GOLDHAMMER:

Great. Did I miss anyone?

UNKNOWN:

Gilman.

GOLDHAMMER:

No, we got Gilman. But now what I’d like to do is pass the baton to Gilman, who is going to set some context for us about what the committee has been doing for the last – I think it’s a year and a half now.

UNKNOWN:

Two years.

GOLDHAMMER:

Almost two years? Almost two years – setting some context for what the committee’s been up to, what we’re going to be doing today, and what success would look like at the end of the day.

LOUIE:

Thank you. So as I said earlier, my part-time job is being a venture capitalist, so basically what I do is sit on my butt in my conference room and listen to startups pitch us – usually slide ware. You know, they come on in and they say they’re going to change the world, they have a great idea, and they throw up a bunch of slides. One of the things I learned from that exercise is that it’s a very effective system for going through lots and lots of ideas on which the entrepreneur has done very little work. That’s the key: the entrepreneur has done very little work. And so one of the goals of the exercise we’re going to be going through today is to think of ourselves as a startup: can we come up with our own pitch deck, to be able to say – before we build anything, before the government goes off and invests whatever large sums of money they usually invest in big systems – what the possibilities are and what this thing could look like before we actually build it. So that’s kind of one of the objectives.

Another objective is, you know, as in any 1.0 startup, a guy comes on in, or a woman comes on in, and says, “I’ve got the billion dollar idea. Please give me $100 million.” Sometimes they come in with, “Please give me $2 billion to give me the $1 billion idea.” Whatever. The point is, I usually come back and say, “Well, you know, I’ve only got this little bit of money. I’ll give you a little bit of money if you can prove out the concept.” And so one of the exercises in any kind of 1.0 activity – and we consider this kind of a 1.0 activity – is: what is the least amount of money, the least amount of energy, we could expend to even prove out that the idea has any traction? So this is not an exercise about building, you know, the system to end all systems in the next 12 months. It’s not the Manhattan Project. But, you know, can we come up with some sort of a framework to think about what the problems are.

And so this is airplane ware, which is kind of traditional for any startup. Airplane ware is when you’ve got a meeting with a venture capitalist and you’re flying across the country, as I was last night, and you need to put a pitch deck together, so you start working on your slides. What’s good about slide ware, airplane ware, is that it’s the latest thinking, good and bad, all consolidated into a single pitch deck, okay? So there’s very little thought but a lot of feelings that have gone into the slide deck – which is kind of what we started off with when we started this committee: we had a lot of hunches, we had a lot of ideas. We wrote a first report looking at the history of forecasting and put some of the concepts together, as in, hey, somebody should think about these kinds of concepts. Most of it is what I call feeling-based rather than fact-based, which is okay. Any new endeavor, particularly in disruptive technologies, starts off with a feeling; it hardly ever starts off with real fact and data, because there fundamentally are no facts or data to start with.

So one of the things we started thinking about, before we jumped into technology, was just why we have disruptive events. On the [..?..] of these kinds of disruptive activities – it could be a piece of technology; it could be, you know, not seeing 9/11, Pearl Harbor, whatever it is that is disruptive – why didn’t we catch it? And of course whenever you look backwards it’s immediately obvious that you should have seen it. So we came up with kind of our laundry list of what causes these kinds of surprises.

The first thing is not knowing enough to even ask a question, right? When you kind of get smacked up on the side of the head it’s usually because you weren’t looking at where the punch was coming from. So: not knowing enough to ask a question. Or you could have asked a really good question but you didn’t ask it at the right time – the environment wasn’t right for somebody to get good signals or responses or answers out of it. This is my favorite: the problem of experts. They assume what has happened in the past is going to happen again, right? “I did this 20 years ago. It was a total failure. This young kid is dumber than I am. She will totally fail as well.” A lot about mirroring – this idea that somebody else is going to tackle the problem, look at the situation, the same way I’m going to do it: “They’ll never go down that path. That makes no logical sense. That is totally crazy. A rational person would never do this.” One of the interesting things about disruptive tech is that rational people don’t make disruptive technologists. Highly irrational, highly focused, somewhat crazy, definitely not normal people. If you were normal you’d probably have a day job and you’d go home, put the kids to bed and enjoy life. If you’re abnormal, you create companies like Oracle, Apple, Google. Information fragmentation: lots of information around, lots of noise all over the place, and you can’t figure out the good information from the bad information. Information overload: way too much stuff coming in; I can’t figure out what’s going on. Biased institutions – bias, your own personal bias, bias of the community – dismissed potential outcomes.

And finally, the most important one, my favorite, came out of the 9/11 Commission, on why we were not able to predict it: a lack of vision. There’s also another one I didn’t put on here, which is dismissing visionaries as crazy, uneducated or not experienced enough to understand what the real world is all about.

So we had to wrestle with what a disruptive technology is, you know? Is it something that just suddenly appears on the scene and changes the world overnight, or is it something that slow-brews for 20 years and then has sudden impact? So we came up with these four concepts around disruptive tech – everybody has their own version, but this is our committee’s definition. It’s innovative technology which triggers sudden and unexpected effects. It doesn’t say a new technology which triggers sudden and unexpected effects – just innovative technology. It could have just appeared on the scene, or it could have been around for a long time and somebody figured out how to use it in a different way. It refers to that type of technology that incurs a sudden change of established technologies in markets. These have real impact, right? It’s really hard to have something that is disruptive and has no impact, so impact is really, really key. And it can include technologies that really change the balance of global power. That’s kind of a hats-off to our DOD government friends, but in many cases technologies have broad impact; they don’t just impact a particular region. They may start off impacting a particular region or a particular market segment, but it quickly begins to spread and has global impact pretty quickly, especially these days. Then of course they’re hard to predict, they’re highly infrequent, and, you know, there are lots of factors that make it hard to see them coming. Huge difference between evolving tech and disruptive tech.

So Al Shaffer, who was director of plans and programs inside the DOD in 2005, said that from the DOD’s perspective there are three reasons why we’ve really got to understand disruptive tech. One is just to be competitive, right? It doesn’t matter whether you’re in a corporate environment or in a nation-state environment from a military point of view: if you don’t stay current on technologies and begin to think about how technologies can impact you, you’re no longer going to be competitive in the marketplace. This is kind of obvious to all of us in this room. The U.S. is not the sole keeper, creator and distributor of high-quality technologies that have disruptive impact. That’s pretty important for policy issues: the old-days approach was that we’re going to solve the problem by not letting any of the good stuff out. That doesn’t make a whole lot of sense anymore, because now we have the problem of whether the good stuff even gets in. And then, quite frankly, we need to stay engaged with the rest of the world. Now I’m not just talking about the rest of the world from the defense, military point of view. I think DOD does a pretty good job – you know, nobody’s perfect – but a pretty good job of understanding what I call the big systems that they may run into, done by big nation-states, that require billions of dollars of investments. We have whole organizations that think about what those platforms might look like. We have whole organizations that go out to listen to what other people are doing, and some organizations that go out and steal what other people are doing. Okay, but that’s not what we’re talking about. What we’re talking about is disruptive technologies in plain view. What are the kinds of technologies out there that we all take for granted, that don’t have obvious military applications, and then we wake up, we go into a country, and they surprise us in a very fundamental, profound, disruptive way? IEDs are kind of a good example of that, right? But there are many more kinds of technologies – the Internet, mobile phones, next-generation wireless toys – that all could have an impact on the Department of Defense. And so what they asked us to do is: don’t think like us, because we already know how to think like us. Think like the market.

Can you encourage a group of thought leaders from around the world to participate in a system that has value well beyond the Department of Defense of the United States, that thinks about disruptive technology – and it’s okay if it’s shareable by everybody? You know, we can figure out what we want to do with it and use it our way. The Chinese can figure out what they want to do with it and use it their way; the Russians, the Israelis, you know, GM, Nokia. If you have a valuable system it should be valuable to everybody. So is there a way to come up with, for lack of a better term, kind of the Wikipedia of disruptive technologies?

So what makes a good forecast? Many people here are forecasters. A few of you are actually people who think they do predictions. But a good forecast is not necessarily an accurate forecast, right? Because it’s really hard to know, when you make a forecast, whether or not you’re going to be accurate. Really hard to do. You can go, well, you know, this person has a good batting average – but at that moment at the plate that person could strike out, right? So what makes a good forecast? First of all, in some ways it’s more important to understand the impact of potential disruptive technologies than actually understanding the technologies themselves.

What is the world, or what could the world look like, right? Hey, we might have gotten it wrong. It might not have been an electric car or a hybrid car; it might be some other kind of car, another kind of vehicle. What’s important to realize, in this particular view of the future, is that we may not be using cars that consume petroleum. In some ways that is more important than figuring out the specific technology this week that we think is going to cause that to happen.

You should increase the lead time for stakeholders to plan for and address potential disruptions. Across the range of potential impacts that are out there, a good forecast gives a person a view to help them prepare, and increases the time in which they begin to think about how they plan and how they are going to react to potential futures.

This is also very important: a good forecast should allow somebody to change the odds from 100% random to slightly better than random. So you should think of it as card-counting in blackjack. It doesn’t guarantee at any moment in time that you’re going to have a winning hand, but over the long term of playing the game out, you beat the house odds by shifting them just a little bit. A good forecast is like counting cards: it doesn’t guarantee a win; it just begins to subtly shift the odds in your favor.

And most importantly – and a lot of forecasters forget this – at the end of all the forecasts is: what do we look for to see whether or not a forecast is coming true? What are the signals, what are the signposts, what are the thresholds, what are the tipping points that we should be out there listening and monitoring for, to say, oh my God, it’s happening? So think of it as a chess game. You’re sitting there and you’re playing a Grand Master, and the Grand Master looks at the chessboard and in about ten seconds says, “Oh, I see a pattern here. It just kind of looks like that game. I know my next eight moves.” As a novice, you look at the board and go, “I don’t know what the heck to do next.” So an early warning system is kind of like having what I call that opening book in a chess program, right? Now how can we fill that opening book – those pattern recognitions that allow somebody to say, “Hey, this might be coming true, this may not be coming true”?
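
Louie’s card-counting analogy lends itself to a quick numerical illustration. The sketch below is ours, not the committee’s; the 52% edge and the round counts are invented. It simulates how a forecast that shifts the odds only slightly above random still wins out over a long run of plays:

```python
import random

random.seed(7)

def play(rounds: int, p_win: float) -> int:
    # Win +1 or lose -1 per round; return the net result.
    return sum(1 if random.random() < p_win else -1 for _ in range(rounds))

# A good forecast, like card-counting, only nudges the odds: 50% -> 52%.
for p_win in (0.50, 0.52):
    trials = [play(1000, p_win) for _ in range(2000)]
    ahead = sum(t > 0 for t in trials) / len(trials)
    print(f"p_win={p_win:.2f}: ahead after 1,000 rounds in {ahead:.0%} of trials")
```

No single round is guaranteed, but the small edge compounds – which is exactly the claim: a good forecast subtly shifts the odds rather than picking winners.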

So, when you would see me down whining in the TIGER Committee – the TIGER Committee is this standing committee for the National Academies of Science in which they put really, really smart people, and a few not-so-smart people like myself, in a room to think about these problems – we were just sitting around whining about how poorly we have done in forecasts. The Department of Defense, the intelligence community, has effort after effort after effort to produce what is fundamentally the same list of stuff. The general process is: we go out, we might use the Delphi method, we might go out and do a survey, or we might have some analytical exercise, and you always come out with the same list. And we kind of say: why is this list always the same? There’s always bio, nano, you know, computation; recently we’ve added neural. There might be two more layers of depth in there, but it’s always the same list. And if you go back twenty years and look at historical forecasts, there’s always the same list. But what was amazing is how inaccurate and how wrong the list is.

In fact, the greater the level of experts participating in the forecast, the greater the likelihood that the forecast is going to be inaccurate – which is kind of weird, right? You stand a better chance of looking into the future by asking people who read science fiction, with no education, than by asking people who are highly educated in the particular subject matter – experts – “Can you predict the future?” So we said one of the causes could be that we always go to the same group of experts. You all speak English; all cleared – and, you know, the requirement to be cleared automatically takes even a population from here down to five people – so it’s highly Western-oriented, highly American-biased. Particularly on the technology side it is high-tech-biased. We like shiny objects. We like really expensive shiny objects. We like really expensive shiny objects that nobody else can see, right? That’s our bias, you know? And if it has bolts hanging off and a big airplane, right, and if it has vacuum tubes on the inside, we immediately dismiss it as so yesterday – and sometimes that can cause some of the lack of understanding of what the possibilities are.

We’re very tech-focused and we’re very list-focused. We’re not impact-focused, and we don’t explore the secondary effects – which is, if you had all these technologies, what would you do with them beyond the obvious uses? Because the most impactful disruptive technologies typically aren’t new technologies but aggregations, in a system, of existing technologies used in a new and profoundly different way that nobody ever anticipated before, right? So you have to look at these secondary effects – not just, wow, you know, it’s nano, it’s really small; that’s kind of interesting, but what impact, how could that be used to create something else? This is really important for the next 15 or 20 years, because there is this gut feeling that we’re once again, just like the Einsteinian revolution, on that brink where you’re going to have this convergence of technologies, science, and quite frankly the human condition coming together to create these really unbelievable opportunities for great disruptions – and we just don’t quite know where it’s going to come from, or from any one particular field of science.

Forecasts typically provide snapshots that are increasingly obsolete: the moment you forecast, it’s over. There is this overwhelming tendency, particularly for people who use the Delphi approach, to go for a consensus view. That’s pretty good when you try to forecast technologies, but with disruptive technologies you’re really more interested in the tails. You’re more interested, in many cases, in the stuff that people dismiss than the stuff that they agreed upon. And so one hunch we have is that you should do the consensus view, use that as the mask and ignore it, right, and then get to the tails. And finally, these forecasts are very, very difficult to make actionable.
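
The “mask the consensus, read the tails” hunch can be sketched in a few lines. In this illustration – the rating scale, the spread threshold and the technology names are invented for the example, not taken from the committee’s work – items experts broadly agree on, favorably or dismissively, are masked out, and high-disagreement items surface as tails:

```python
from statistics import mean, stdev

# Hypothetical expert impact ratings (1 = dismiss, 10 = highly disruptive).
ratings = {
    "nano-coatings":     [8, 8, 7, 9, 8],   # consensus favorite
    "mesh-network toys": [2, 1, 9, 2, 10],  # dismissed by most, loved by a few
    "quantum batteries": [3, 2, 2, 3, 2],   # uniformly dismissed
}

for tech, scores in ratings.items():
    spread = stdev(scores)
    # Broad agreement in either direction is the consensus "mask".
    # High disagreement marks a tail: the dismissed stuff worth a look.
    label = "consensus - mask it" if spread < 2.0 else "tail - investigate"
    print(f"{tech:18s} mean={mean(scores):.1f} spread={spread:.1f} -> {label}")
```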

So we spent a lot of time talking to ourselves, talking to other folks who participate in the creation of these kinds of forecasts; we talked to technologists, we talked to folks from the Department of Defense, we talked to some people from other countries. We went out to different countries and explored around. And so here are our hunches. Again, these are hunches – there’s no foundation in fact or proof. Our hunches.

A good forecasting system should be persistent, right? It should be: hey, gee, you know, what’s the current thinking – pull it up on your website, be able to go through it and scan it, and have it be as up to date as you possibly can. So it should be living, rather than a moment in time.

It has to be not focused on DOD needs, because if you focus on DOD you start focusing on things that go boom, and things that go boom take a certain logical way of building down the path of things that go boom. War in the future may not be about things that go boom, right? Remember, war is the final stage of making somebody else do something that they don’t want to do, when you’ve exhausted all other possibilities. That is the military’s application of force. There are many other kinds of force, and potential force, that we may not be considering, which may be the definition of war in the future that is not the definition of war today.

Third point: don’t ask the experts. Ask the people who are most likely going to be affected by the great disruptive changes. Ask the people who most likely are going to create those disruptive technologies – and, though there may be a few in this room, it’s probably not going to be that many who look like us. They’re probably people who today are just kids – and what we call kids is anybody under the age of 30. The second part of that is: besides go young, look at what they’re betting their lives on. After you finish your post-doc program, what would be the great program for you to work on next that you want to do – not what your professor wants you to do, or what your department head wants you to do? What is it, as an entrepreneur, that you’re willing to risk the next four years of your career and life to go pursue? Ask those kinds of questions. Go abroad, and don’t ask them in English.

You know, it’s kind of fun. I grew up in a household – I’m the only English speaker in my household. My 4-year-old and 6-year-old speak three languages – English, Mandarin and Cantonese – and they’re learning Japanese now, right? They give me a completely different answer in English than they give mommy in Chinese, right? So a hunch is that if you ask somebody in English they may give you what they think you want to hear, versus listening to what they would normally be talking about in their own language as it naturally occurs. The subtlety of language is really, really important.

Assume the world is lumpy. I know everybody read “The World Is Flat”; I know we think this is a global world. But technologies impact people differently. Different countries, different technology clusters, have different priorities. If you’re sitting there in the Middle East, you worry about what life is going to be like when oil is no longer important in the world – and that may be a completely different set of priorities from somebody in India trying to figure out how to deal with billions and billions of people who are starving and get them into the modern world, versus somebody sitting off in Europe thinking about, you know, the next Collider project, right? The world, while maybe relatively flat, we suspect is very lumpy along the way, and understanding the lumpiness is important.

One methodology doesn’t fit all. After looking at all these approaches, we don’t believe we can create one approach that will obsolete all other forecasting approaches, and our gut hunch is that we should consolidate lots of different approaches into kind of this grand repository, a multiple repository, that ….. [Mic noise]

This was highly debated, particularly because we are the National Academy of Sciences: our committee thinks there’s value in engaging the crowd as well as experts. So crowd sourcing, we think, has a role in this, as well as expert sourcing. We’re not a subscriber to either camp that believes one replaces the other; we actually think both are important. How to use the crowds and how to use the experts is something that we wrestled with and tried to figure out. Web technologies, we think, will be very useful. Don’t boil the ocean – we said that already. Don’t launch a Manhattan Project. Any forecasting should have more than one future being prognosticated, and we think backcasting may be very useful as a tool to figure out how to develop a signals pattern that can actually be monitored. And it needs to be impact-focused rather than …. [Mic noise]

So, forecasting disruptive technologies. There are four key things that we think any particular forecast of a technology or impact should include. One, it should include a vision – a forecast of a reality described in a vague way; trying to be too specific is actually a bad thing in many cases. It should include a measurement of interest, or measurements of interest: you know, the thing that will change. The tipping point could be the cost of energy stored in a unit of mass, where once that number crosses a threshold, that is the key thing that starts everything flowing. There should be some signposts – hey, you know, these things happened; there’s an indication that this either will happen, or might happen, or can’t happen – and then the actual signals themselves.
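
The four elements Louie lists – a vision, measures of interest, signposts, and signals – map naturally onto a simple record type. A minimal sketch; the field names and example values are ours, not the committee’s:

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    vision: str                      # the future, described deliberately vaguely
    measures_of_interest: list[str]  # quantities whose movement matters
    signposts: list[str]             # events saying it will/might/can't happen
    signals: list[str]               # raw observables to monitor

example = Forecast(
    vision="Personal transport no longer consumes petroleum",
    measures_of_interest=["cost of stored energy per unit mass"],
    signposts=["storage cost crosses a commercial threshold"],
    signals=["battery price indices", "energy-density announcements"],
)
print(example.vision)
```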

Report 1. You guys, I don’t know if you guys had a chance to read our lovely Report 1, but it’s long and boring and will put you to sleep. It did have these six major sections: basically looking at the past, looking at the forecasting approaches, some things that we talked about – and we discussed a lot of issues around bias, because we think that is a really, really critical thing that basically handicaps most forecasting approaches – and then we looked at some persistent systems.

So why are we here? We want a lot of new ideas, and some old ones. We want to learn from the experience of folks, and we want to explore some new methodologies, as well as figuring out what existing methodologies could be used in a unique way that could add to the answers.

And we want to develop a framework for Version 1.0. So when we thought about Version 1.0, somebody said, “So what is Version 1.0? I mean, what do you guys really want?” First: start with the output, right? Whenever you design a video game, start with what the screen shots will look like, right? Because if the screen shots aren’t that exciting and the kids don’t want to play it, then it doesn’t matter how great the algorithms on the back side are, and it doesn’t matter how good the input was. Start with the output. Then think about what sources you have – what resources are out there that can actually provide useful input, once you figure out what people would actually use on the output. Then define the methodology. Come up with a simple block diagram of both the human process as well as the machine process. It’s not just a machine, you know, a computer-science mapping; you are going to go through a persistent forecasting system where there are humans, there are computers, there are information sources. Can you define a high-level block diagram of what that would look like? Could you come up with a way of tracking signals and tipping points? And then, at the end of this, all the reasons why this is going to fail or won’t work, or some of the challenges that we’re going to run into.

So somebody asked me what the ideal output looks like. Now, this is my gut – I don’t want to bias you to work on this list – but the thing that fascinated me most was, you know, in my prior role I ran In-Q-Tel for the Central Intelligence Agency, which is kind of a venturing organization to go out and get good ideas in Silicon Valley and other places in the United States. And the CIA comes out with this book called the “World Factbook.” Basically, you go by country and it lists all the key attributes of that country and some of the issues it has. So that kind of biased me into saying, gee, you know, it would be great to have kind of the “World Factbook” for forecasting. I can flip to a country – I can go and say, well, Georgia: what are the issues in Georgia today? What are your technical bets? What are your universities thinking about? What kinds of technologies are going to impact them? But most importantly, what are their big knotty problems? I suspect that if you try to figure out people’s problems – you put resources into solving problems or creating opportunities. So if you had an output that basically didn’t just say, here’s the world and here are the ten technologies that impact the world, I think it would be more useful to say, by country or region or by technical cluster: here are the problems and opportunities they’re going to work on; here’s how they’re beginning to think about the problems; these are interesting sources of technologies and uses of technologies.

So there are a bunch of questions that we’ve got to ask ourselves if we actually build this system. A), would anybody use it? A good question that we get repeatedly is: since this is kind of sponsored by the Department of Defense, why would anybody else in the world participate? Why should they even trust the system, given the history of U.S.-based technologists? One argument says, well, you know, the Internet was kind of created by the Department of Defense – how about this thing? We don’t know the answer to that, but it’s a key question, because we can build a great system – if you believe it can be a great system – but we’re not sure anybody would actually use it. And if nobody uses it, it’s not a great system. So figuring that out is important. There were some arguments by National Academies’ members about whether or not this was technically feasible. I’m less concerned about that; it’s just an issue of people not being able to see beyond their noses. And as I said, you know, what’s the minimum level of effort to test the viability? What’s the least amount that we can do to see if this is going to have traction? So that’s my airplane ware – and, you know, with airplane ware typically about 80% of it is raw, but it is a good starting position to begin to think through what the problems are.
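
Louie’s ask – a high-level block diagram covering both the human process and the machine process, plus a way of tracking signals and tipping points – could be expressed as a pipeline skeleton like the one below. The stages and their names are our illustration of his description, not a committee design:

```python
def gather_sources():
    # Information sources: crowd submissions, expert input, open literature.
    return ["crowd submission", "expert interview", "publication feed"]

def machine_scan(items):
    # Machine process: automated filtering/clustering (placeholder logic).
    return [item for item in items if item]

def human_review(items):
    # Human process: experts and crowds vet and rank what machines surface.
    return sorted(items)

def track_signals(items):
    # Persistent monitoring of signals and tipping points over time.
    return {item: "monitoring" for item in items}

def render_output(state):
    # The output - the part Louie says to design first.
    for topic, status in state.items():
        print(f"{topic}: {status}")

render_output(track_signals(human_review(machine_scan(gather_sources()))))
```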

GOLDHAMMER:

Gilman, would you also just walk us through, as a way to start here, this?

SCHWARTZ:

Jessie, three other people came into the room.

GOLDHAMMER:

Yes. Let’s do quick introductions for folks who just arrived. Thank you, Peter.

VONOG:

Hello. I’m Stan Vonog. I founded two startups here; currently the second, and Gilman is an investor – his venture firm is an investor in our startup. And I come from Ukraine and I was educated in Russia, mostly at the Institute of Physics and Technology. And I won a couple of software design competitions worldwide and presented to many people like Bill Gates, etc. in Russia, all kinds of cool technology. So I’m very interested to be here. It’s all very interesting and I’m excited.

GOLDHAMMER:

Great. Who else joined us? Yes?

CULPEPPER:

I’m Mark Culpepper, Chief Technology Officer at SunEdison. We basically do distributed generation of photovoltaic (PV) systems on commercial, government and utility rooftops. My background – my degree is in international economics. I’ve been in technology ever since college, primarily in infrastructure, what I would call lightweight infrastructure, so data communications, telecommunications, and then transitioned from that into distributed generation and PV about four years ago.

GOLDHAMMER:

Great. And one more? Lynn?

CARUTHERS:

Oh, hi.

GOLDHAMMER:

We’ll introduce Lynn in a moment. Hang on, Lynn. Gilman, would you mind – we’re going to use this as a basis for some of the discussions. If you could just give us a high-level description of what this is, perhaps answering some of those questions, that would be very helpful.

LOUIE:

So let me kind of back up. One of the challenges we had when we did our first draft report came from some of the monitors – monitors are kind of like people who review your paper to see whether or not it’s publishable, very much like submitting your work as a Ph.D. candidate, and we do have peer reviews; it’s a very important process. One of the concerns that the reviewers had is: you know, this is really interesting and provides a great background, but you haven’t given the readers enough of a framework to think through how you would even go about building out a system to accomplish the goals that the committee wants to accomplish. So one of the recommendations was to put together a flow diagram or traditional block diagram – one or the other, or some hybrid approach – that basically describes what you’re talking about.

So let me start off by saying the following: this is raw, okay? So please don’t take it too seriously. We understand that it’s raw – and let me explain why we think it’s raw and then why we think it’s still valuable. In the old days – most of you, about half of the room, can remember – computer systems used to be batch operations. You used to take a pile of cards, submit it down in the basement of some building somewhere, a bunch of geeks would load it up, and the next morning you’d get your report, right? And so when CRTs started showing up, or even the Teletype 33 started showing up – you’ve got to be really old to remember what the Teletype 33 is – people started thinking about, gee, you know, this computational environment, this real

[Pages CD D-12 through CD D-63 of the machine-read text are not included in this excerpt; the transcript resumes mid-discussion on page CD D-64.]

…and, you know, try them all. The things that work, keep them; the things that don’t work, throw them out. And then you have this committee, right, and part of the thing about being on the committee is you should come up with new, exciting ways to incentivize people.

SCHWARTZ:

Like a Nobel –

GOLDHAMMER:

Great. Paul and then Jennie.

SAFFO:

You know, the Office, the Congressional Office of Technology Assessment, was not –

[General laughter]

[Simultaneous comments]

SAFFO:

– it was simply defunded. They never eliminated it. And the shell is sitting there. It sounds kind of like a Frankensteinian portmanteau kind of construct but, you know, if you tucked that in with the NAS it would give you the kind of political cover that would protect you against members of Congress. So next time Peter doesn’t get beat – I’m sorry; I won’t talk about that – but, I know it sounds impractical in many ways, but –

SCHWARTZ:

Does it still formally exist?

SAFFO:

It still formally exists. It just doesn’t have any money.

SCHWARTZ:

Brilliant. Do it.

SCHWARTZ:

If there were an OTA we might not be having this meeting.

UNKNOWN:

Yeah, yeah.

HWANG:

I just wanted to add on a couple of topics discussed here. One is about the open source, outside/inside the government, the other DOD; the other is international involvement. In my engagement – for the last twenty, I think so, twenty years is kind of a break point. Before that, the DOD really had different principles, you know, from the last twenty years. I think the early 1990s really was the break point. The principle was really, you know, we wanted to have the outside really test out whatever the technology concept was, and really, you know, set us a criterion: we were not going to adopt anything until the commercial sector was really able to prove it was really viable. So, you know, that’s one thing. [..?..] break point, about twenty years. I would say about the 1990s. Ken, you know, you can make a comment on that. You were directly in that. My involvement is really, you know, kind of like that – [..?..]. Okay.

PAYNE:

I think – I’m sorry. Go ahead.

HWANG:

Okay. I just wanted [..?..] get off on this. And not only international involvement: almost all National Academies committees deliberately invite international members’ participation if it’s feasible. Feasible means we can identify the people. So there is always, you know, international involvement – at least on the committees I’ve been with, we always have some international members. So that part is almost like standard practice. So for anything that comes out of the National Academies, I think I can say we always have some element of international.

GOLDHAMMER:

Great. Ken, did you want to say something?

PAYNE:

No, just Jennie was asking me – it was in the nineties they kind of went to, you know, “Oh, wow, we want to go to commercial off-the-shelf as much as possible,” and, you know. Now, they did it wrong. They’d get something – they’d get a system like SAP, which, you know, makes you change your business process, but then they’d hire somebody to build like bridge software so they could still do it the way they did it before.

[General laughter]

[Simultaneous comments]

[Laughter]

PAYNE:

I was in there. I saw it happen. I mean, very important, made a lot of money off of [..?..] on that. But, you know, I was like, “Why’d you get SAP in the first place? Why’d you get a software [..?..]?” You know, but commercial off-the-shelf was one of those things that – and, you know, for good reason. I mean, it’s proven, your timeline is not that long, you know; as long as you use it right it’s not that bad. But as typically happens in government a lot, they go overboard with it. And so, like Gilman says, they wait ’til it’s proven outside, then they say, “Okay, yeah, we can use it,” because people don’t want to be culpable for anything. And for a group of people whose jobs are pretty secure, it’s amazing how risk-averse they are. [Chuckle] But –

SCHWARTZ:

That’s how they protect their security.

PAYNE:

Yeah, I guess.

[Simultaneous comments]

McCORMICK:

This one might be kind of controversial, and it’s a little bit out of scope in some respects, but it just strikes me – I had a long conversation with a VC this past week, and I thought it was a pretty interesting conversation from this perspective: we educate some of the – most of the Ph.D.s around the world in the United States, and then our current process and our current thinking is, you know, we make it almost impossible for them to basically get a green card and a visa to stay in the United States. In some respects, if we give somebody a Ph.D., we should be giving them a visa to be able to stay, and keep the innovation here and make it easier to monitor at the end of the day.

SCHWARTZ:

That’s what we used to do.

McCORMICK:

Yeah, I know, and we don’t now.

TWOHEY:

I actually have this issue right now in a startup I’m doing, like today. I mean, like, this is a real issue.

UNKNOWN:

[..?..] has it too.

VONOG:

Well, I would say it’s much better than many other countries, the visa system. So in places like Russia it’s just like tourist visas, working for [..?..], [..?..] Russian citizens, and I studied there for eight years, so it’s a bit of a pain. And if you want to work it’s just like… So it’s not that bad. And in a way, if you’re a great Ph.D. you can always find work here if you want to stay. It’s not a problem really.

McCORMICK:

It’s getting much harder.

[Simultaneous comments]

VONOG:

I mean, all my friends who are Ph.D.s wanted to stay in the United States.

GOLDHAMMER:

Do any of the committee members have any specific questions or issues that they’d like to raise in the time that’s remaining to us here? Yes, Harry?

BLOUNT:

One of the things that I’m not sure I heard from any of the tables in detail – yet we ranked it highly, I think, on everybody’s sheet – was this concept of anomaly processing. I think we only superficially touched on it, and I guess the question is: if we’re going to successfully run a platform with very few people, that means you’ve got to have some very effective anomaly processing tools over time to do this. So did anybody hear during the process, or have some background in seeing, tools that are very, very good at processing the edges?

STRONG:

Well, part of the process that’s there on table two is a process of, number one, identifying measures of interest and then, number two, doing general monitoring of those for statistical anomalies. And that is a – it’s a built-in part and it is pretty much automatable.

BLOUNT:

And is there something out there already?

McCORMICK:

There’s a startup ten blocks from here called Twine that –

BLOUNT:

Twine?

McCORMICK:

Twine, and they’re working on some really interesting stuff. If you want me to, I’ll introduce you to the CEO.

STRONG:

It’s, there are, yeah, there are a lot of people who are doing this kind of thing.

McCORMICK:

There’s a massive amount of research going on between Google, Twine, Bing, I know Yahoo!, a couple like that that are doing this.

TWOHEY:

There’s a guy, Ron Conway – he’s a venture capitalist – his whole thing is, you know, real time. If you hit the real-time local trifecta you’re going to get a bunch of seed money from him. I mean, not necessarily for sure, but –

[Simultaneous comments]

TWOHEY:

What I’m saying is that there’s a lot of investment in, you know, different kinds of search things. It’s not just – there’s another search engine that just raised I think a couple million dollars in funding this week. I mean, like, people are still actively spending money, private money, trying to make a better search.

McCORMICK:

Bottom line, there isn’t a pre-packaged tool you can go out and buy today, but there’s a lot of stuff out there you can put together.

VONOG:

I’ll bet you can tune up the Google engine so it finds like lists, you know.
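
Strong’s two-step – pick measures of interest, then monitor them for statistical anomalies – is, as he says, easy to automate in its simplest form. A minimal sketch; the window size, the z-score threshold, and the toy series are invented for illustration:

```python
from statistics import mean, stdev

def anomalies(series, window=12, z_threshold=3.0):
    """Flag points that sit far outside the recent trend of a measure."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical measure of interest: a cost drifting down slowly, then
# dropping sharply - the kind of jump worth a human analyst's attention.
cost = [100 - 0.5 * t for t in range(24)] + [70.0]
print(anomalies(cost))  # -> [24]
```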

OCR for page 83
Appendix D CD D-67 LOUIE: Let’s not forget the power of humans. You know, one of the things about the Internet is that it enables mechanical [..?..] to really be a very effective system. And so, you know, as good as the algorithms are and there's some really great stuff out there, probably the -- I looked at table 3 and I say to myself, you know, the brilliance of that is, you know, they’re not – unlike our table, table 2, as "Oh, we'll rely on technologies" 'cause they let the people out there use their eyeballs and their minds to rise things up to the top. There’s huge value in that. You don’t have to do everything by traditional automated processes if you use the network appropriately and you use kind of systems in a very effective way to serve up useful information. McCORMICK: Like humans are still the best engine for figuring out anomalies. UNKNOWN: Yeah, and if you’ve got, you know, a million of them, it’s a pretty good engine. UNKNOWN: Yes. GOLDHAMMER: Harry, did your question get answered or… BLOUNT: I think so. I mean, I think, I agree whole heartedly with Gilman, is that we haven't come up with a machine or algorithm that can do a better job of pattern recognition than we can. I think that’s part of it and I think how it’s displayed, lots of information is displayed, which we really didn't touch on either. McCORMICK: Well actually to add to that, I think one of the biggest issues that I think exists today is not the analytical tools or the people and stuff like that. It’s actually the display. You know, the UI to be able to still filter through vast amounts of information, you know, in a efficient, economical way. That’s probably one of the biggest [..?..] right now. GOLDHAMMER: Norman, did you have a comment? WINARSKY: Not a comment but another question to the group. GOLDHAMMER: Great. A little louder. WINARSKY: The question is, one of the issues that I see is measures of success. I mean, people have been talking about giving prizes and things like that. On the other hand, we’re looking at ten to twenty-year horizons so how do you -- what does the group think about how you decide if you’re doing a good job? GOLDHAMMER: Please, Ray(?)? STRONG: I have a comment on that. Rather than talk about making accurate predictions, which will take ten or twenty years to measure and it isn’t what we’re all about anyway, I look at measure of success is the breadth of preparedness that’s represented by the number of things that you’ve considered, the number of different things and the breadth of that, you know, what it covers that you’ve considered and you have plans to act on if X happens. So it’s being able to, you know, if somebody, if there were somebody who were to generate a question, “Do you have a plan for if a meteor strikes?” or, “Do you have a plan for…?”, you know, generate lots of those questions and have those questions come in, and the measure is what percentage of those questions do we actually already have Transcripts were not edited.

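One way to make STRONG's breadth-of-preparedness measure concrete is as a coverage ratio: of the generated "do you have a plan for X?" questions, what fraction does the system already cover? A minimal sketch, with invented questions and plan topics:

```python
# Sketch of STRONG's preparedness measure: the fraction of generated
# "what if X happens?" questions already covered by a plan. The
# questions and the plan registry are invented examples.
def preparedness(questions, planned_topics):
    if not questions:
        return 0.0
    covered = sum(1 for q in questions if q in planned_topics)
    return covered / len(questions)

questions = ["meteor strike", "pandemic", "grid failure", "quantum codebreak"]
plans = {"pandemic", "grid failure"}
print(f"{preparedness(questions, plans):.0%} of questions covered")  # 50%
```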
GOLDHAMMER: Paul, did you have a comment?
SAFFO: No.
GOLDHAMMER: Okay, Mark.
McCORMICK: I'd actually like to add something to that. If you think about it, as soon as you figure something out that you didn't know you didn't know, it changes your thinking, right? And just by having a constant stream of that emerging, you know, it actually makes you more aware. It changes how you make decisions; it changes how you view the world.
WINARSKY: So I agree with that. So how do you quantify that?
McCORMICK: Well, you know, to a large degree it's the efficiency and effectiveness of what we're talking about: the ability to identify what you didn't know you didn't know. That's the fundamental problem I think we're talking about here. It's not what you know you don't know, 'cause let's face it, there's tons of that stuff that's out there.
LOUIE: You can measure it by – 'cause, you know, at the end of the day a forecast has to communicate, it has to drive some [mike noise]
SCHWARTZ: See, I think that's the measure of success. Do people respond – is the quality sufficient that it motivates a response? And if the answer is it's too abstract or unclear or too far out, you know, then nobody takes it seriously. It's not a success. It's a success if it motivates an appropriate response.
NOLAN: Well, wait. I'm worried about that because –
GOLDHAMMER: Yeah.
NOLAN: – a very plausible-sounding narrative can motivate action when –
SCHWARTZ: It could be wrong.
GOLDHAMMER: Yeah.
SCHWARTZ: It could be wrong. There's no guarantee that you're right. That's why it's not the accuracy, 'cause you don't know that. The question is, does anybody take it seriously enough to do anything about it, and you may be wrong. That's a risk you take.
NOLAN: It feels like there's, in my mind, maybe quite a step change between what Ray's talking about, which is identifying something clearly enough that you could think through the response, and motivating people to actually take more action against it.
SCHWARTZ: Yeah, I'm going one step further. It isn't simply the understanding. It's actually that it motivates – okay, we're going to go launch an R&D program in X; we're going to focus attention on monitoring Y more deeply.

UNKNOWN: Yes.
SCHWARTZ: If it produces a response, then it's a success.
NOLAN: That sounds like a [mike noise] incentive program.
UNKNOWN: It incents hype.
SCHWARTZ: No, I – we can talk about this offline, but I don't think so.
[Simultaneous comments]
WINARSKY: My worry about that approach is, you know, if we had predicted two years ago that the banking industry would mostly collapse, whether we persuaded people to act on it is not a good measure; they still might have thought it's very unlikely, that you're crazy. So in some sense we want –
ZYDA: Had you announced that, you would've had the Secretary – the Fed making sure it collapsed bigger than it did. [Chuckles]
LOUIE: Yeah, but it isn't that you're just taking action. You know, I want to blend these two concepts together.
SCHWARTZ: I completely disagree with what you said, Norm. Let me just – I think you're completely wrong. The Federal Reserve should have had, and did have until 2004, a group whose job it was to anticipate financial crises. And in fact Greenspan eliminated the group in 2004, 'cause no financial crises are any longer possible. This is not a possible future event. We now understand markets, so he eliminated it.
WINARSKY: In the Greenspan world this would not have been taken seriously.
SCHWARTZ: Well, my point is simply that in fact that entity ought to have had a group who said, "What would happen if?" Now, they don't go out and publicly say, "Hey, we're worried about Lehman Brothers." That's a different question. But they'd take appropriate action in anticipation, monitor it closely, and if they see it beginning to emerge, they act in a timely way. So in fact I think it is quite plausible.
LOUIE: It is, it's –
GOLDHAMMER: Gilman, go ahead.
LOUIE: Let me just finish that out, which is I want to blend these two concepts together, 'cause I think the value is in the blending of these concepts and not one by itself. That is, successful forecasts should provide a roadmap of potential future outcomes that is actionable and trackable.
SCHWARTZ: Right. Bingo.
LOUIE: If the event never takes place, you should see it in the signals, because you're not hitting those signposts.

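LOUIE's actionable, trackable roadmap suggests a simple bookkeeping structure: each forecast carries the signposts it predicts, and the fraction actually observed drives the choice to keep monitoring, call it hype, or escalate to planning. A sketch under those assumptions; the forecast, its signposts, and the escalation threshold are all hypothetical:

```python
# Sketch of LOUIE's signpost tracking: record which predicted milestones
# have actually been hit and escalate once enough of them click off.
# The forecast, signposts, and threshold below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Forecast:
    name: str
    signposts: set[str]               # milestones the forecast predicts
    observed: set[str] = field(default_factory=set)

    def record(self, signpost: str) -> None:
        if signpost in self.signposts:
            self.observed.add(signpost)

    def status(self, escalate_at: float = 0.5) -> str:
        hit = len(self.observed) / len(self.signposts)
        if hit >= escalate_at:
            return "escalate: start putting plans together"
        return "hype so far: no signposts hit" if hit == 0 else "keep monitoring"

f = Forecast("cheap desktop sequencing",
             {"sub-$1000 genome", "retail kits", "clinical clearance"})
f.record("sub-$1000 genome")
print(f.status())  # keep monitoring
```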
SCHWARTZ: That's right.
LOUIE: If you're hitting signposts, or there are new signposts being hit that you didn't even think about, that's going to tell you something about the quality of the forecast.
SCHWARTZ: Exactly.
LOUIE: Okay, so it is not that you actually spend money, and it's not necessarily just the hype cycle. The hype is fine as long as you can map it out, so you can say, "You know what, based on what we're tracking, it's hype," or "Based on what we're tracking, oh, my God, the Chinese are on to something," or the Americans are on to something, or Google is on to something, because they're hitting these target – these measures of interest are beginning to click off. And then ultimately to have the plans in place to say, "We have enough of these milestones that we've been hitting, these signposts that we've been hitting, and these units of measure, that we'd better start putting together plans," right? So that's what I consider a successful forecast.
SCHWARTZ: Exactly.
GOLDHAMMER: Comment over here?
TWOHEY: I just want to know – so it's twenty years after 1989, right, and in the late Eighties there's another financial crisis. You know, people were always worried about these scenarios. So do we have any data to backtrack what we did twenty years ago and whether it worked or not? I mean, because it seems like we're making this entire discussion here –
UNKNOWN: We repealed the [mike noise]. [Chuckles]
TWOHEY: Wait. What I'm saying is this entire discussion we've had here, right, has been divorced from feedback mechanisms for things that might have happened a while ago. So maybe we should list those and, like, see what worked and what didn't before we just guess.
BLOUNT: Spending twenty years in the financial markets, I spent a lot of time looking at this, and you get into the risk and the opportunity. It's useful to look at history as long as you don't try to tightly model history.
LOUIE: Right. A good one is Saddam Hussein's weapons of mass destruction. So if you think about it from an intelligence analyst's point of view, the number one failure prior to that was our inability to track nuclear weapons testing in places like Pakistan and India, because we fell susceptible to deception. So here comes this guy who used to have nuclear weapons, who we know is a chronic liar, saying he doesn't have them, all right? So the pattern is, okay, "Last time we screwed up because we didn't catch the liar. We've got a known liar and he's saying he doesn't have it. Therefore it must be a lie. Therefore he must have it." You can construct a model to prove that he potentially could have it based on the same set of facts that would have proven that he didn't have it. So there is a danger, and this is the danger that experts run all the time: they assume the past is a predictor of the future. It's something you should look at. You shouldn't be ignorant about it, but it's not necessarily a predictor.

TWOHEY: I'm not saying we're – like, it's important whether [..?..] were successful or failed in the past, but just what incentive systems did we use and what outcomes did they produce. Because if we used incentive systems and the outcomes all sucked, then we shouldn't go use the same incentive system again.
McCORMICK: Well, so we're basing it on behavioral – what's the behavioral –
TWOHEY: That's all I'm saying, is just maybe, like, looking at that might be worthwhile.
GOLDHAMMER: We are almost out of time here. Gilman, did you have any final questions that you wanted to ask the group?
LOUIE: So one of the things that we'll get tested on, you know, when we write all this stuff up and we go through it tomorrow, is we get evaluated by groups of experts who will come back and say, "Now, this is really all interesting, but it doesn't seem possible." So my first question to you is, you know, kind of going through all these interesting approaches, is there anything in your mind that says that any one of these activities or potential approaches is an impossible act just to do, right – just to be able to construct an organization, whether it produces good forecasts or bad forecasts? Is there anything in here that you worry about that is a show-stopper, that we're kind of assuming could easily happen, where an expert would come back and say, "You can never produce this forecast because this is an impossibility"?
REED: I think the time horizon is way too long. I mean –
SCHWARTZ: I'm sorry. We can't hear you.
REED: Oh, sorry. The time horizon – the incubation, the period between the creation of the technology and the time at which the technology actually causes a disruption, can be less than ten years. And so you're not going to – how can you forecast out ten years?
McCORMICK: I think it's also going to depend on the vertical that you're talking about or the generation you're talking about.
UNKNOWN: Yeah, yeah, agreed, agreed.
McCORMICK: So for some it's a six-month horizon; for some of them it's a twenty-year horizon.
REED: Right, I agree, I agree. But I think a lot of the big ones that we've seen lately, like the Twitter in Iran –
McCORMICK: Oh, social media.
REED: Yeah. How could – I mean, granted, you could have predicted it maybe a year or two ago, but ten years ago there wasn't even Twitter.
McCORMICK: I think actually one of the big dangers with it is not so much that this stuff is not possible; it's that things like that, which people would externally say you should have been able to predict, you couldn't predict, and therefore your system is fundamentally flawed. There's almost like a political thing that's got to go around this to say, "Hey, this is not perfect."

UNKNOWN: Yeah.
GOLDHAMMER: Stewart, question, comment?
BRAND: Just wondering how you can make sure you've got some early wins. If this thing doesn't have early wins and the promise of early wins, why the hell fund it?
LOUIE: Well, the argument that we consistently run up against is, why is this approach better than talking to a group of experts? They're experts. We can trust them; we can look at their Ph.D.s, we can look at their prior success and expertise in the field. Why is a group of crowd-sourced experts, foreign nationals who are prone to deception, going to produce anything useful relative to the existing approach? That is a question we're just going to have to deal with. That's a question that we're going to be facing.
BRAND: Well, think about it. The early win there is you've got these incredible people, great experts. They're in the room, they're talking to us. Wow. So you've already got a kind of success story right there, even though the product may be irrelevant. So I'm trying to figure out a way to – how do you make the –
GOLDHAMMER: How do you redefine success?
BRAND: – how do you redefine success in a way so that you can go along for five years without any disruptions to report? I don't know – do you retro-predict stuff that already exists, or things like that? It's an interesting design problem: how do you get a project, a long-term project like this, funded?
SCHWARTZ: And sustained.
[Simultaneous comments]
McCORMICK: I want to come back to your business model question. There is one thing I was just thinking about that's not the short term but the long term. Take the premise, which is proven out in history, that there'll be more technology introduced in the next ten years than in the previous one hundred – and that's been true for almost every previous ten years. At some point the scale of innovation, the scale of information that comes into this process, gets to be almost unmanageable, you know. And right now it is manageable, but at some point you do need to start looking at automating some of this stuff. You know, I think that's probably the bigger issue. As an organization, you're behind the eight ball to be able to keep up with the level of innovation that's taking place.
LOUIE: I think that there's another inherent problem with disruptive technologies. Until they actually appear and become disruptive, they're fundamentally unbelievable, right? You can never predict that a hundred and – what's the Twitter limit, 142 characters? – can change the world. It's like a stupid idea. It's the dumbest idea out there, and all of us VCs who saw it thought it was a stupid idea.

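Taken literally, McCORMICK's premise (each decade introduces more technology than the previous hundred years combined) pins down a growth rate: a per-decade factor r must satisfy r^10 > (r^10 - 1)/(r - 1), which first holds at roughly decade-over-decade doubling, about 7 percent a year. A quick check of that arithmetic; the premise and the numbers are illustrative only, not the workshop's:

```python
# Back-of-envelope check: if technology introduced per decade grows by a
# factor r, when does the next decade exceed the previous ten decades
# combined? The premise and the arithmetic are illustrative only.
def next_decade_exceeds_prior_century(r: float, n: int = 30) -> bool:
    prior_century = sum(r ** k for k in range(n - 10, n))  # ten prior decades
    return r ** n > prior_century

for r in (1.5, 1.9, 2.0, 2.1):
    print(r, next_decade_exceeds_prior_century(r))
# 1.5 False, 1.9 False, 2.0 True (barely), 2.1 True
```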
McCORMICK: Well, eBay's another one. It's like, you know, what are you going to do, a yard sale online? All of a sudden, boom, it becomes a new channel.
GOLDHAMMER: I think one other thing I noticed just in the conversation, which may in part answer your question, is that, at least on the technology end of things, there are actually quite a few things that are quite doable, either algorithmically or through some other kind of technology. There are lots of different experts, configured in lots of different ways, who can answer a lot of these questions, or can evaluate hypotheses, or can generate narratives. But there's this sort of interstitial part that is very hard to talk about, and it's the coordinating function: how do you actually get all these pieces to interact with each other in ways that produce good results? And this is fundamentally the problem with any organization. There's a cultural element to it as well, and it can be done, but it requires some attention to really understand how you weave these things together in a way where it doesn't look like a quilt where you've just patched together a couple technology solutions, a bunch of experts, some predictive markets, boom, out come your forecasts.
GOLDHAMMER: It's the incentives.
[Simultaneous comments]
LOUIE: You know, the thing about what you just said that strikes me is that probably the question that would not be asked of us by the experts, that we should ask ourselves, is that any one of these kinds of new endeavors requires leadership and a visionary to make a group of people think the impossible is possible. Absent that small group of individuals who are going to go off and change the world by building any one of these systems, or some hybrid version of it, it is probably highly unlikely that a group of well-educated, pretty smart, good engineers and scientists can build a system if they fundamentally aren't driven to [mike noise]. And that's probably the biggest risk in any of this: to assume that you can follow a Betty Crocker cookbook recipe out of a freshly published National Academies report and build one of these things. It probably will fail.
UNKNOWN: Yeah.
McCORMICK: To your point, software – it's always what, Version 3 that actually works?
UNKNOWN: Yeah.
McCORMICK: You've got to have the staying power to get to Version 3, and then redo it and redo it and redo it as the market changes.
GOLDHAMMER: Unless there are any other burning questions from the committee, apart from Ken –
[Simultaneous comments]
[Laughter]
PAYNE: I think it's the last time the committee's together, right?
TALMAGE: Tomorrow.

PAYNE: At least when your sponsor's going to be around, right?
UNKNOWN: Yeah.
PAYNE: So with you here.
UNKNOWN: Right.
[Laughter]
PAYNE: And from all of us, from DDR&E and DIA, I'd like to thank everybody on the committee and the folks who participated in the workshop.
[END OF RECORDING]
Transcripts were not edited.