Michael Bikard has conducted the first twin studies of new knowledge, looking at how and why some ideas succeed and others fail.
A common phenomenon in science is that people have the same idea at the same time in different places. Your research looks at how that happens and why one idea becomes accepted or commercialised and the other idea doesn’t.
Yes. One of the basic ideas underlying this research is that a lot of opportunities, whether technological or entrepreneurial, go unobserved if nobody takes them. So if you want to study, say, how opportunities get discovered, you have a problem, because you observe only successes, not failures. The idea of using simultaneous discoveries was to try to do that systematically for the first time. What we know is that there are a lot of instances where people say, well, I had the opportunity to do that but I didn’t do it, so we know that failures to discover opportunities are very common.
To take a historical example, the telephone patented by Alexander Bell was invented at the same time by Elisha Gray. Yet we don’t really know Elisha Gray; we only remember Bell. Gray was on to something big, and he didn’t do anything with it.
So I have collected a dataset of many cases of people making the same discovery at the same time, where some are very successful with it and others aren’t. For the first time, I can actually see opportunities that people don’t take. A lot of my research looks at how science gets commercialised and how people create, or fail to create, economic value based on scientific knowledge.
Now, the big challenge in understanding that question is that when you see that a scientific discovery is not being used, you never really know whether it’s because someone is missing an opportunity or whether it’s just basic science that is not useful to start with.
For example, erythropoietin (EPO) is the hormone that regulates the production of red blood cells in humans. Its synthetic version is a very important drug for patients suffering from anaemia. EPO was first purified in 1976 by a professor at the University of Chicago called Eugene Goldwasser. He wanted it to be commercialised as he saw that it could help millions of people and also had a lot of economic value. He even told the university to apply for a patent for it, but they didn’t follow up. No one followed up. Goldwasser went to different companies, including Abbott, and they didn’t want to hear from him.
And so one of the big questions in instances like that is: is it because the science is really basic that people don’t want it, or are they just not seeing the huge opportunity that Goldwasser saw?
Five years later, a start-up called Applied Molecular Genetics (now Amgen) took on the project and produced one of the most successful drugs in the history of the biotechnology industry.
So why do twin studies?
In behavioural genetics, researchers have used human twins for decades to better understand the relative importance of genetic and environmental factors. My twin studies of new knowledge apply essentially the same idea: when a discovery is made but not used, we don’t know whether that is because of its fundamental nature or because of the environment. By looking at the same discovery in different places, I know the two instances have the same fundamental potential, so I can see how one environment creates a lot of economic value while the other does not, and try to understand why that is.
How did you get into this research area in the first place? What was the first impetus?
It’s a combination of things. Being at MIT, I was talking with a lot of scientists. Also, I’m French, and I always wondered: if the same discovery is made in Boston and in Paris, or in Boston and in London, will it create the same amount of economic value? There is no way to know, because when we see the success of Boston with biotechnology, we don’t know whether it’s because Boston scientists make discoveries with more potential for commercialisation or because the environment in Boston is better.
When I first came into the PhD programme at MIT, I told my supervisors I wanted to compare different environments. They told me I was crazy, because everything changes across environments; you can’t compare them because you never know where the variance comes from. At MIT I looked at simultaneous discoveries and talked to scientists, and then it occurred to me that this was an opportunity for researchers: to hold the opportunity constant and observe what type of environment allows the creation of economic value based on it.
What can executives take away from this research?
The underlying message of my dissertation and my research in general is that the environment really does matter. It’s not only about the opportunity. There are a lot of opportunities out there that people don’t take, and the environment makes a huge difference. There is an art, and with luck there will soon be a science, of discovering entrepreneurial opportunities, and that depends on your prior experience and social network.
So, basically, it’s not only about seeing an opportunity or not seeing it; it’s about a match between an idea and an environment?
Not all ideas are adapted to all environments. So if you’re a manager and have an idea for a new product then, in some circumstances, you may want to think about working with somebody in a university to develop it. And in other cases, a start-up would be the best kind of vehicle to develop the opportunity. And in yet other cases, it would be a large firm.
You need to think of organisations as vehicles, or some kind of organism that can grow different types of ideas. Large firms will be a lot better at growing ideas that are more similar to what they already do, so they already have the right capabilities, the right structure, the resources.
Now, suppose a specific idea is very different from what came before and therefore involves a lot of uncertainty. Then you get into a world where large companies are just not great. Large firms have routines, and that allows them to be very efficient, but that’s also their weakness, because routines are usually adapted to specific business models, and ideas involving a lot of unknowns usually require figuring out new routines. It is very, very unusual for the old routines to work. So if you’re a large firm and you come up with an idea that is not well adapted to your own capabilities, you may want to think about licensing it to a start-up, or maybe spinning off a company.
It’s important to be smart about understanding which kind of organisation is better at developing which things. Universities, start-ups and large firms are very different environments. It’s not that one is better than the others. Each has its own trade-offs, and that makes it better adapted to growing some specific ideas than others.
Your research also looks at the role of universities. Can you explain more?
The most mature project I am working on is about understanding the role of universities in the division of innovative labour, and mostly trying to understand some of the problems associated with using research and ideas coming out of universities. There’s an assumption that it’s very easy to use what comes out of universities because everything gets published. On the other hand, a lot of practitioners complain that a great deal of what universities produce doesn’t get used, and that’s definitely very high on the agenda of policymakers, especially in the US.
I find evidence that scientific knowledge produced in universities is not being used as much as it potentially could be to produce new technologies. And the way I know that is that I can compare discoveries made in universities with the same discoveries made in industry, and my results indicate that the ones made in industry are used more. The most obvious explanation is that university scientists are not all interested in technology development. They focus on scientific research. So even if there is something interesting, they’re not going to pursue it because they don’t have the incentive to do so, and because that’s not their job.
It’s clear that when a firm makes a discovery, the firm seems a lot more likely to use that discovery to produce technologies. But then, perhaps more surprisingly, even third-party inventors seem to make more use of discoveries produced by firms than of those produced by universities.
Why do people use more discoveries that come from firms? Aren’t universities all about access to knowledge?
The results are more suggestive than conclusive, but I think they open an interesting avenue for future research. The first factor is awareness. A lot of inventors do not seem to be aware of relevant work coming out of universities, so that could be part of the gap between industry and academia.
The second thing I find evidence for is an issue of trust. I look at the life sciences, and apparently a lot of firms have tried to commercialise findings from published academic science that turned out not to be reproducible. This has created a lack of trust.
The third element is about the way academics approach discoveries. They tend to focus on the fundamental contribution to theory; that’s how science progresses, so that’s what is in the minds of academics doing science, and that’s how they write their papers.
For firms, on the other hand, even when they write scientific papers, what’s on their mind is not the contribution to science but whether it works. They want to produce a technology. Scientists in academia don’t necessarily pay as much attention to whether it works, whereas people in firms care about exactly that. They don’t really care what the mechanism is or why it works; they just want it to work.
In summary, three elements might make it difficult for inventors to use knowledge produced by academics: awareness, trust and approach to discovery.
So things need to fundamentally change.
In the life sciences, it’s clear that the issue of trust is gaining a lot of publicity. Given the importance of replication in science, this is definitely a good thing. The basic problem is that there is so much pressure on scientists to publish that they may not take the time to conduct as many robustness tests as they perhaps should.
You also do research on collaboration and creativity.
One of the main reasons why I started being interested in this topic is that the scientific literature on collaboration has been very positive. A lot of papers have been written that celebrate collaboration as something almost magical, that allows people to become more creative. But when I was talking to scientists, they were not always that happy with collaboration. There seemed to be a lot more problems in reality than in the literature, and so that’s how I started getting interested in it.
The first insight came when I was talking to a friend who is a biologist, and he was telling me that collaboration was inefficient and that he was losing so much time in meetings. Then it occurred to me that one of the reasons people were so optimistic about collaboration was that they had been comparing outputs. If you compare the average output of a person working alone with the average output of people working together, you will always find that the more people, the better the output.
It seemed to me there was a problem in our approach. By comparing outputs, we don’t measure the input that goes into them, and it’s possible that a lot more pain, effort and time goes into, say, a paper with three authors than into a solo-authored paper. In other words, our approach has blinded us to measuring and understanding major costs associated with collaboration. As a result, we might be too optimistic about it. The risk is that we ask people to work together when they would actually get better results working separately.
Another important cost that people have not been thinking about is the allocation of credit. The question of credit allocation is a new frontier in research on collaboration. I don’t think we know a lot about it, so that’s something new I’m exploring with my co-authors Fiona Murray from MIT and Joshua Gans from the University of Toronto.
The main argument of our paper is that, yes, collaboration is associated with higher-quality work on average, but it carries a strong cost in terms of productivity. Besides, collaboration creates a lot of issues around credit allocation.
We looked at how credit gets allocated in science. We found evidence, again not conclusive, that people are over-rewarded for collaborative work. It’s not clear whether that’s a good or bad thing for society, but for scientists, it seems to make more sense to write two co-authored papers than one paper each, alone.
For managers, it is important to keep these trade-offs in mind. If you want to use collaboration to foster the creativity of your employees, you need to know that collaboration also has important costs in terms of efficiency and credit allocation.