A Scientist’s Impact
The Wright Brothers. Thomas Edison. Albert Einstein. When inventors and scientists discover Great Things, we shower them with praise and fame. But does this culture of rewarding first movers make sense? Why should who discovers something be the key issue? Isn’t the effect to create a gold-rush atmosphere in which we advance technology at breakneck pace regardless of its human impacts? And isn’t what ultimately matters not who makes a discovery, but how the discovery or invention will affect the well-being and flourishing of humanity?
The usual argument is that we praise individual inventors because their genius changed the world: Their intellect took humanity a step further, by uncovering profound new understanding of the world. But when you start to look deeper, the logic becomes tangled. Without the Wright Brothers, would we never have flown? If a certain nameless caveman hadn’t invented the wheel – would we still be stuck in caveman ways?
No – sooner or later, someone else would have discovered the wheel, or powered flight. It seems clear that certain scientific discoveries are inevitable. In other words, the classical heroic theory of invention is too simple.
Multiple discovery is the (surprisingly frequent) situation in which scientists working independently make the same discovery at around the same time. It’s hard to view an inventor as a truly singular genius when several other people simultaneously make the exact same discovery. Calculus was discovered by both Newton and Leibniz at about the same time, and evolution through natural selection was independently discovered by both Darwin and Wallace. And there are many more examples.
The explanation for multiple discovery is not that there’s some paranormal energy allowing ideas to flit from mind to mind through telepathy. No, the insight is that most discoveries and inventions build incrementally upon previous discoveries and inventions. You need an engine to build a car, you need a piston to build an engine, and you need metallurgy techniques to make a metal piston. In other words, invention doesn’t happen in a vacuum; there’s a dependent order in which things are likely to be discovered.
Science and invention unfold over time. Each new scientific discovery makes possible things that were previously impossible. An electric motor makes little sense before electricity is discovered, just as computer software is impossible without a computer. But, given the discovery of electricity, the invention of the electric motor may be inevitable.
The point is that the true impact of a scientist who discovers something new is not that the thing would not otherwise have been discovered. That is, if the Wright Brothers hadn’t created an airplane, some other inventor eventually would have. So the hero worship of individual scientists just for discovering something new makes science into a race, where we award ribbons for getting somewhere first.
This dressing-down of individual scientists isn’t mean-spirited, but is done to bring attention to the broader picture: If most discoveries are inevitable, then what impact does an individual scientist have? What factors in scientific discovery actually change the history of the world? It turns out that the most important impact may be the relative order in which discoveries are made.
History and Path Dependence
How society unfolds is path dependent; that is, past events influence the future. For example, imagine how differently World War II might have ended if German scientists had invented nuclear weapons before their surrender – or if medical technology had enabled Abraham Lincoln’s wounds to be treated successfully, so that he survived his assassination.
In other words, the timing of discoveries may be what most alters history. So a scientist’s impact on the world may result more from her choice of field and focus than from raw genius. Think of it this way: Right now we have many more scientists working on advanced weapons technology than on how to feed those who are starving. Thus we might expect weapons to become increasingly sophisticated, while progress in combating starvation remains relatively slow. As data scientist Jeffrey Hammerbacher once said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”
The overarching idea is that while many discoveries are inevitable, there is likely wide variance in when such inevitable discoveries are actually made. Thus as a society we can influence our own history by strategically allocating our research resources. An interesting question is how such resources are currently allocated, and what the possible outcomes of the status quo are.
How Markets and Governments Focus Research
The free-market economy has been enormously successful in driving innovation. The basic idea is that entrepreneurs and scientists can capitalize on their innovations by crafting them into products or services that they can sell to others. However, it’s not all gravy: Not everything Important is Profitable. More problematically, there’s no promise that the invisible hand resulting from our collective profit maximization has foresight and wisdom enough to avoid global crises or existential risk.
In other words, blind faith in capitalism is likely not a solution to all our problems; worse, the consumerism it breeds seemingly trades the larger philosophical intricacies of life for the distractions of reality TV, clickbait, and an endless fire-hose of mobile apps. A danger of capitalism is that it incentivizes us to exploit our own psychological weaknesses to the detriment of all.
Governments also fund research, some of which does aim at humanitarian causes. However, governmental research too often supports business or military interests. For example, 50% of 2013 federal R&D dollars went to the Department of Defense, and Canada’s research agenda is increasingly set by business interests.
The allocation of business and governmental research likely reflects the areas in which we will see the most future progress: Military developments and profitable technologies. Yet as weapons and technologies get more powerful, their possible destructiveness increases. For example, while nuclear power and genetic engineering can benefit society, the flip side is that nuclear weapons create the dark possibility of mutual assured destruction, and genetic engineering likewise enables the creation of terrible new biological weapons.
The Technology of Morality
Thus an important question is: Are we morally mature enough to handle the responsibility such technologies place in our hands? The commercial rush to develop new profitable technologies, and the governmental rush to develop new weaponizable technologies, ignores this question – or blindly attributes wisdom to “the market” or to self-interested (and often short-sighted) governments. Yet failing to address this key issue could be society’s downfall: If we do not learn how to responsibly shoulder the moral responsibility of increasingly powerful technologies, it may be only a matter of time before a moment of moral weakness unleashes destruction on the globe. Indeed, there have already been a number of near-nuclear wars.
While we may not be able (or want) to slow down technological development, perhaps we can reallocate resources to investigate technologies for enhancing our morality. The idea is that we are basically slightly-evolved apes, and that our morality, while slowly improving over time, has not kept pace with technological growth. Perhaps it is essential to divert focus from creating new weapons or simply profitable technologies, to seriously examine biological modifications to our morality.
Our current path of technological development is dominated, not by individual genius scientists or the interests of humanity writ large, but by a rushed effort aligned with the agendas of business and governments. As a result, the more fundamental question of human well-being and flourishing is pushed to the periphery.
For us as a species to responsibly handle increasingly powerful technology (i.e. not destroy ourselves), it may become critically important to allocate more resources to research pathways that appear promising for augmenting our own morality (whether through better moral education or technologies that modify our own psychology). While changing our morality may at first seem like “playing god,” the idea is to bring the virtues of moral paragons within reach – virtues that today seem nearly impossible for almost all of us to attain. Ultimately, we all strive to be better people, and our future may critically depend on reaching that goal.