I would speculate that the decrease in "good ideas" is at least correlated with, and probably causally related to, the decrease in government spending on research (2% of GDP in the 1970s to 0.78% of GDP in 2014 [1]). Considering that the entire US technology industry was originally sparked and supported by government research, I would imagine that a large part of the problem is that corporate R&D is a poor substitute for the longer-term, non-profit-driven perspective of government research. Further, I wonder whether a lack of government-funded research leads, in general, to less competitive markets that are more amenable to established monopolists: in that scenario, research emerges primarily from corporate R&D departments, and emerging players in markets can't compete. I don't have any empirical evidence, but I think it's interesting to consider.
Let's not forget how much of this research was driven by anticipated military needs. The 'D' in DARPA stands for 'Defense'.
I don't think that the US cut military-oriented R&D expenses significantly. Does anyone have any idea how the direction changed?
OTOH, AFAIK, a few of the most pervasive technology changes, like the GUI or mobile networks, were not military-driven, but were purely commercial R&D (Bell labs, Xerox PARC, etc).
ARPA (before the "D" was added) funded a lot of the preliminary work you discuss (e.g. Engelbart's work at SRI was all government funded). They also essentially paid for the graduate school of all the folks who later did the early ARPANET work (which Bob Kahn at ARPA also paid for).
And consider that "private sector" labs like SRI and MIT are essentially government research facilities. Education is only 16% of MIT's expenditures (and 14% of revenues) and where do you think those revenues come from? Hint: most of it is not corporate grants.
Through the 70s the boundaries between corporate and government R&D were often fuzzy. True, Bell Labs wasn't "military-driven," but there were close formal and informal ties, and remember that they were under a tight consent decree up into the 1980s. It wasn't today's "revolving door" -- think of it as a permeable membrane. This was thought to enable corruption, and in the wake of the Vietnam war some separation was put into place. Of course, the resulting separation hasn't cleaned things up as expected; in many ways it's worse, having shifted things out of the technical domain and into the political one.
According to [1] (easiest to just download the spreadsheet), the US defense budget for "Research, Development, Test, and Evaluation," adjusted for inflation, peaked in 2008-10 at about $80 billion and has been on a downward slide since, with 2015 estimated at $63 billion. The defense budget as a whole took a huge hit with the 2013 sequestration [2], and RDT&E fell by $9 billion (over 10%) in a single year. NASA's budget in 1965-66 was more than 4% of the federal budget; now it's less than half a percent, with its operations torn across dozens of Congressional districts. DARPA has only managed to avoid this bureaucratic creep because its structure is unique among federal agencies.
Combined with privatization that hasn't yielded any clear benefit, pork-barreling that cripples organizations, and general mismanagement, the falling budgets have probably had a very negative impact on US R&D capabilities, offset only slightly by modest increases in NSF and NIH grants for academia.
When organizations 'need stuff badly' - everything moves much, much faster and things get done.
In the Canadian Army, we could not move our troops around effectively, and 'procurement' for troop transports was taking decades. As soon as the engagement started in Afghanistan, we dropped our 'special needs' and just bought Chinooks from the US, off the shelf. Done. It's not exactly R&D, but it highlights the bureaucracy of such organizations.
I think 'necessity' will drive outcomes far greater than small variations in budget.
Consider this: 'good ideas' should be the result of 'problem analysis' - that is to say, they should be 'solutions' to existing problems. In academia in particular, they're doing a lot of 'pure' research, not so much focused on pragmatic things.
Facebook, for all of its rubbish, is still a 'very useful thing' to a very large number of people. I don't think the very concept of Facebook lends itself well to academic ideals. Nobody would have considered such a thing a 'viable good idea' from an intellectual perspective. And yet, it really is useful.
The best ideas come from understanding where 'pain points' are and solving them, ergo, an understanding of 'the system to improve' is essential.
GSM, which took over most of the world, was designed in Europe. It likely was based on earlier research, very probably intersecting with military-funded research.
I wonder if the typical "towers + backbone + terminals" setup is relevant for battlefield communication. I'd expect mesh setups to be more viable and more resilient.
GPS was (and is) largely military. Militaries, first responders, and aviation still depend on VHF and in some cases HF radio.
Motorola was (is?) a serious player in VHF and HF, so the first cell phone probably wasn't directly DARPA but the overall company had a lot of background with that sort of thing.
A lot of research ends up supporting or augmenting existing technologies and industries -- so-called "applied" research. This research shouldn't be expected to yield major game-changing innovations.
A lot of the GDP growth over that period was fueled by technological capabilities. Which means a lot of the new research funding ended up going toward applied research supporting industries built around those new capabilities.
Given these trends, measuring against GDP is a far more reasonable metric than absolute dollar values.
Basically, research in the past made new industries possible while providing a bit of subsidy to existing industries, whereas research today is mostly government subsidy for product improvements in existing industries with a little bit of basic research on the side...
I'm honestly not qualified to know which data set is most accurate; certainly the increase is less dramatic using this data set, but it still shows an increase in $$ spent over time - vs the decrease that was the foundation of the original comment.
I'm qualified. Acid test: do you really think we were spending 1/17th of research money in constant dollars in 1970?
Percentage of GDP spent on research has declined. The US was 200M people in 1970. We're 320M now. Per capita research spending in constant dollars has declined as well.
Also, I think the point of the paper is that growth gets harder, not easier. From that perspective R&D spending should have increased massively in constant currency to keep pace, not decline even slightly.
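The per-capita claim is easy to sanity-check. A rough sketch using the approximate figures quoted in this thread (the constant-2010-dollar totals of ~$96B and ~$126B, and the population numbers above); these are illustrative, not authoritative:

```python
# Per-capita research spending in constant dollars, using the
# approximate figures cited in this thread (not authoritative data).
research_1970 = 96e9   # US research spending, 1970, constant 2010 dollars
research_2014 = 126e9  # US research spending, 2014, constant 2010 dollars
pop_1970 = 200e6       # US population, 1970 (approx.)
pop_2014 = 320e6       # US population, 2014 (approx.)

per_capita_1970 = research_1970 / pop_1970
per_capita_2014 = research_2014 / pop_2014

print(f"1970: ${per_capita_1970:.0f} per person")  # ~$480
print(f"2014: ${per_capita_2014:.0f} per person")  # ~$394
```

So even though the absolute constant-dollar total grew, the per-capita figure fell by roughly 18%.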
Looking at it from the other side of the mirror, there's probably some other suite of factors that causes both things. Taken as a prisoner's dilemma, defecting from R&D may be perceived to have a payoff (those engineers talk funny and cost a lot), which saps R&D and also slows growth.
I think that in an increasingly specialized workforce, the management team has less bandwidth to apply to R&D. One stump hit and it's done for. Successfully leading R&D has little payback in the larger economy; leading M&A is better understood. This is a corollary of the general "tower of Babel" problem.
Remember that Amazon is slightly a tribe of madmen to most business types in large companies.
The use of the term in corporate spheres is also diluted.
According to the "best and brightest", R&D is now nothing more than patent pursuit. If the systems used for production are old and buggy, fixing that ( including significant new development ) is not considered "R&D", no matter how proprietary said systems may be.
That's World Bank data. I'm fairly certain that "current US$" is unadjusted. If you use their constant 2010 dollars, you get $96 billion on research in 1970 and $126 billion in 2014.
Using an inflation calculator [1], that 1970 amount would be worth approximately $6,273,872,679,045 today. The ~2% research share of that would be $125,477,453,580 spent on R&D, and we actually spend close to $132,600,000,000.
So it seems to have been essentially a set amount, just adjusted for inflation, the whole time.
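The arithmetic above checks out as stated: taking 2% of the inflation-adjusted 1970 base lands very close to current spending. A minimal sketch, reusing the exact figures from the comment:

```python
# Reproducing the parent comment's arithmetic. The $6.27T figure is the
# comment's inflation-adjusted 1970 base; 2% is the research share of
# GDP cited at the top of the thread.
inflated_1970_base = 6_273_872_679_045   # 1970 amount in today's dollars
research_share = 0.02                    # ~2% spent on research in the 1970s

implied_research = inflated_1970_base * research_share
actual_research = 132_600_000_000        # current spending cited above

print(f"implied 1970-level spending: ${implied_research / 1e9:.1f}B")  # ~$125.5B
print(f"actual current spending:     ${actual_research / 1e9:.1f}B")   # ~$132.6B
```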
It's not that exactly. It's that investors are more interested in making money than in "ideas." And they feel that people with a good track record of delivering results are more likely to make them money than an unknown team with a great idea.
Part of being a good management team (which is what VC's are looking for after all) is being able to sell. Selling the concept to investors, selling the product to customers, selling benefits to partners. People's history, credentials, contacts, etc. are all part of how you sell.
Just think about how hard it is to determine if someone can do a specific, relatively narrow, job when hiring. Then imagine how hard it is to determine if a team can work together, be persistent enough to see things through but flexible enough to roll with the punches as needed. To assess if the team can deliver a product, manage their finances, hire a good crew, craft a compelling marketing message, sell to customers, sound good to the media, etc, etc. This is not easy. Is it any wonder that VC's put great stock in past successes of any kind?
Maybe this isn't fair. But it's how the world works. Sorry for rambling this morning :-)
> It's not that exactly. It's that investors are more interested in making money than in "ideas." And they feel that people with a good track record of delivering results are more likely to make them money than an unknown team with a great idea.
This is post-hoc justification.
Google, Facebook, Reddit, and Twitter have been worth billions (Reddit the exception) for more than a decade. How much have they collectively innovated since 2005? They have a bunch of small improvements that anyone could have anticipated, and they've purchased many of the good ideas that have come along (Oculus). Some are floundering in spite of having massive reach (Twitter); some can't anticipate uses of their platform until after it happens (Facebook). Some are dead and their founders have joined the VC game (Digg). Did Kevin Rose not look like a pretty sure bet in 2006? Whatever happened to Milk?
Yet according to the "they know how to deliver" logic, they should be knocking new ideas out of the park. Where are those ideas? Of course they have some successes, but you could have some successes too if you had billions of dollars to subsidize yourself and an existing social network to support you and advertise your success. You throw ideas at a wall, and the ones that stick are taken as evidence of know-how. It's a broken metric.
Honestly, there is no evidence these companies are well run except for the fact that they won, but they effectively won a lottery at which point money starts flowing in. And that's the entire VC game - you'll have some moderate successes, mostly failures, and some unicorns. As long as you hit jackpot a few times you are ok. Meanwhile, part of the VC job is selling the idea that they know what they are doing, too.
And with hundreds of thousands of the world's smartest kids being funneled through the top colleges each year, you've got a lot of lottery tickets.
I think it's interesting that all of these authors are at Stanford or MIT. Seems similar to the backgrounds of a lot of tech talent:
Gates (Harvard), Zuckerberg (Harvard), Brin and Page (Stanford), Reddit guys (Cambridge area) you could go on and on.
Maybe there would be more diversity of ideas if the people behind them weren't all coming from similar backgrounds and self-selecting to certain locales...
I suspect that buying something that is a good idea and bringing it to commercial success is what an accomplished founder should be best at doing. A successful founder is usually good at telling a better idea from a worse one ahead of time, before the competition has done the same.
A successful founder is usually good at execution, so the next logical step is to bring the idea to fruition. OTOH, coming up with a new and worthy idea by oneself is much harder: most good-looking ideas do not survive contact with reality.
This is why Google came up with Maps and Android, both hugely successful, by acquisition. This is why Facebook got its iconic "Like" functionality by acquisition.
As you noted, coming up with an idea and following it through to making millions off it is a lottery. Acquiring a good idea in a semi-product shape and making millions off it is also a lottery, but with much better odds. Guess what an intelligent person would choose more often, given the choice.
Do you have links on how or when Facebook got likes by acquisition?
I don't remember seeing this anywhere, but it might as well have happened. And if it did, it's weird how we can forget that Facebook didn't always have a like button, seeing as it's a fundamental part of their platform.
Also, good ideas with a lot of value don't imply good businesses. As an example, just look at GitHub's financial numbers. Another issue is the scale at which your idea becomes profitable: it is almost impossible to sustain growth like Facebook or Amazon did before you make a dime, and they were excellent ideas with great business execution. In the past, a music concert with 10k people would be enough to sustain artists; now you can have a million viewers on YouTube and receive a tiny amount in comparison.
> Also, good ideas with a lot of value don't imply good businesses. As an example, just look at GitHub's financial numbers.
A lot of that is of their own doing. GitHub is a great business; they just got greedy. Interestingly, if they fail, their failure will be cited as evidence that you can't make money hosting code (when you already can).
If they hadn't taken on millions and kept hiring engineers like crazy, they'd be doing just fine. They wouldn't be losing $66 million hosting a code repository, anyway...
> A lot of that is of their own doing. Github is a great business; they just got greedy.
Amazon was/is greedier, and look at where they are now. My central point is that great businesses are just a small subset of great ideas with a lot of value. One simple example is what happened with software tools on Windows: there are great tools that provide a lot of value for users, but people are used to free software and almost nobody is buying them.
I think the fact that people with good track records are more likely to get a good ROI is mostly a self-fulfilling prophecy.
Everyone buys into the same illusion: that because someone succeeded before, they must have the secret sauce for success. In doing so, they forget the thing that actually matters: the product/service.
Not only that: many good ideas don't yield the sort of returns investors are looking for, so they never have a chance to get funded other than through grants, and those are a relatively small slice of the market. Many things are not profitable but still good to have, and since they can't sustain themselves they die, to be replaced by other things that are profitable. That's the direction in which the market evolves. Governments in general think this is a good thing, and perhaps it is, but not for good things that don't exist because of the mechanics of the market.
I agree but would expand it to humans in general rather than just investors. Look at the top charts in the mobile app stores for examples (with a few exceptions).
> poorly-funded good ideas can't compete against well-funded bad ideas.
Nonsense. In the U.S., if the idea is worth anything, then it's surely worth spending ~$1500 to file a provisional patent. That provisional patent gives you one year to determine whether the idea is feasible. Talk to investors; see if there's interest. Nothing proves interest better than an investor willing to invest real money. If there's no interest, take that as an indicator and move on.
> Investors are more interested in funding terrible ideas from people they know than great ideas from people they don't know.
Nonsense. Investors are looking to make money. Give them a feasible idea that is backed with some guarantee that the money they fork over for you to develop it won't simply be copied, and the investor will take that over a terrible idea from someone they know.
> The market is monopolized on every side and good ideas can't compete against sheer capital.
Nonsense. Even if the biz ends up collapsing, the patents can still be sold or retained. Many here will not like that idea - but the reality is the patents provide the investor with some comfort especially if the idea is worth anything.
This is how it should work but it's not how it does work.
If you approach an investor with the idea for a "bridge", they won't fund it because there's no proof people would use a bridge, and you can't prove it until you're funded. End of story. No bridge.
> Give them a feasible idea that is backed with some guarantee that the money they fork over for you to develop it won't simply be copied, and the investor will take that over a terrible idea from someone they know.
Please, point me to this investor. I think your hope of what it should be masks the reality of what it is. I'm with you; I wish it was that way too.
By "bridge" I assume you literally mean a traffic or pedestrian bridge. Both are expensive and beyond what I read the OP as meaning by "funding an idea." Given this site, I read the OP more as a software or hardware person wanting to build something and get funding for it, something much smaller and with fewer regulatory hurdles than an actual physical bridge.
But let's assume he did mean a complex, expensive bridge. If it truly were a worthwhile location for a new bridge, then why would it be hard to show that people would use it? It's more a matter of measuring existing traffic on other nearby bridges, etc.
Sorry, I was talking symbolically, that if the concept of a bridge didn't exist and you wanted to show how valuable it would be, it would be difficult without an implementation. History is littered with ideas that are not obvious to consumers until they start using them. The automobile was considered a toy for more than a decade after its invention, for example.
I'm specifically talking about this Catch-22: that you often can't create something revolutionary (e.g. build the first bridge) without investment and you can't get investment until it's built and demonstrating its worth.
Your initial statement seems to indicate that it's relatively easy to raise funding if you have a 'good idea', yet a good idea is usually considered the modification of an existing idea ("Facebook with Twitter, but for dogs"). A revolutionary idea won't be as accessible (in terms of understanding) by investors or even initially consumers, and won't get funding for that reason. The irony is that it's the revolutionary ideas that everyone is seeking, yet the way funding works often precludes it, rather than encourages it, from happening.
> wanted to show how valuable it would be, it would be difficult without an implementation
So implement it then. In the context of this site, we're talking software or hardware - what's it going to take - a weekend, a month, or a few months - nothing impossible or out of reason if it's a good idea and worth it.
Something revolutionary? No. Software like that would take more than one person over a few weekends. And that's the catch.
You said, "Investors are looking to make money," and you're right. The problem I'm trying (and failing) to articulate here is that true revolution in software or hardware is not always something you can do over a weekend, and it's not something the customer might request (that works better for variations of existing products).
Governments and large companies used to have bigger budgets to fund such research; they don't do as much of that anymore, so it's left to private investors, who understandably expect a clear return on their investment, but that precludes a deeper type of innovation in favor of shallower innovation.
We would rather gather fruit than plant seeds. Gathering fruit is where the money is, so it seems to make sense. But that's short term borrowing from the long term. We're facing barren trees and scratching our heads. We're not out of ideas -- we just haven't invested in them.
Politely disagree. If you're raising funding before talking with potential customers, then you might be doing it wrong. (Depends slightly on the product, but generally true in my experience.) People who supply funding also want to know that customers want your product.
This is correct. But the cost of filing a provisional is $130 for a small entity. Maybe it's $1500 if you have an IP firm do it. Writing a provisional is well within the grasp of an engineer. Claims in a nonprovisional application, well that is for a specialist.
What defines a good idea?
From an investor point of view, a good idea is one that returns on monetary investment.
I'd argue that an idea can't be very good if it doesn't have at least a neutral ROI on execution.
But that's the job of the investors; they don't try to hit returns with every idea. What they look for is a capable team who can pivot if that's what it takes to make a compelling product used by millions.
After close to 8 years of zero or negative interest rates (QE is a form of negative interest rates), there are two things causing this:
Anything -reasonable and otherwise- has been tried. Mostly on the taxpayer's dime.
A number of industries, most famously big parts of the US oil industry (but they won't be alone), are bad ideas that people really, really, really want to see working. Therefore those companies have been substituting capital injections for profits. It should be called a Ponzi scheme, but in some ways it isn't.
A lot of companies have been using credit and capital to do what every economist assumes they use profits and free cash flow for : pay dividends and share buybacks. In reality for quite a few companies free cash flow has been steadily worsening since 2015 or so.
So we have the double whammy of the government making it VERY cheap for huge companies to try every idea under the sun, which they have of course partially used to make it impossible for others to try. For example, Uber, Deliveroo, and Domino's Pizza have used capital to beat every reasonable delivery company on price, the idea being to become the next Amazon; of course, they have not become the next Amazon. And I would say that Amazon itself is also a "become a monopoly, then jack up prices" play that so far hasn't even succeeded at part 1. Its stock has done well not because Bezos has kept his promise, but because of his reports of progress.
So has every idea been tried? Of course not. But due to sustained easy credit it has become very hard to try new ideas. Those of us working for huge companies had best prepare for a few leaner years.
> But due to sustained easy credit it has become very hard to try new ideas.
If anything low interest rates and easy credit make it easier to raise capital and explore new ideas.
Unfortunately when rates are kept artificially low to avoid financial catastrophe, resources are misdirected into ventures that should never have been funded in the first place, and we kick the financial can down the road. As they say, necessity is the mother of invention, and right now we need necessity (resource constraints/challenges to the status quo) more than ever.
>Unfortunately when rates are kept artificially low to avoid financial catastrophe, resources are misdirected into ventures that should never have been funded in the first place, and we kick the financial can down the road.
Untrue. Rates are not kept low for an explicit purpose of "avoiding catastrophe." They are kept low to encourage borrowing.
Resources are never "misdirected," as both the lender and the borrower must agree on the venture. This system allows for experimentation and innovation. As someone sitting on the sidelines, it is easy to call many investments - usually after they fail to produce ROI - failures that should not have been funded in the first place.
However, the economy is a cyclical process. Your comment suggests you think it works as an equilibrium, which has never been the case. Eventually, speculation dies down, rates increase, borrowing decreases, saving increases, companies repair balance sheets, etc.
QE, in conjunction with the Fed dropping the federal funds rate to 0% for 6 years, was a direct response to the financial collapse of 2007-8. Obviously policymakers weren't going to explicitly say, "wow guys, we screwed the pooch on this one, time to lower rates and inject liquidity to help banks (and the businesses/individuals they lent to) clean up their balance sheets full of mispriced, risky assets and avoid financial catastrophe," but sometimes you need to read between the lines.
There is being (equilibrium) and there is becoming (cycles away from/towards equilibrium). The economy is always becoming, but it is striving towards a pure state of equilibrium where supply and demand in every market are working to equal one another, but never getting there.
> Resources are never "misdirected," as both the lender and the borrower must agree on the venture.
That's not what "misdirected" means. In economics, "misdirected" investment is investment into unproductive things. It doesn't matter whether the lender and borrower agreed with a bad idea; if it's a bad (unproductive) idea, the investment in that idea is still misdirected.
Now, you're right that after the fact, it's easy to call what wasn't clear at the time. But the flip side is also true: In the heat of the moment (when you think there's a lot of money to be made), it can be hard to see what will be blindingly obvious after the fact - that this is actually a really bad investment.
Low rates increase borrowing, which tends to lead to asset inflation, which in turn makes the rich richer. That in turn might incentivize some to put more of their excess capital to work in riskier ventures.
Eh... "those who have the gold make the rules", including what kind of ideas to pursue.
America, and SV in particular, tends to be very predictable, focusing on and exploiting ideas that capitalize on labor and work already available in some form. Examples are the so-called "innovations" in social networking, search advertisements, automated profiling (presumably for advertising, etc.), open source, yada yada yada.
I disagree on Amazon. They have pricing power (many consumers won't price compare), and they are profitable. They may be heavily investing in new businesses with their profits, but the main business is still throwing off cash.
Also, with regard to retailers, pricing power looks quite different: raising profit margins from 2.5% (like Walmart's, a great business) to, say, 5% doubles your profit.
Can Amazon create enough incentives for people to pay that extra 2.5%? Probably. And even if you're a penny-pinching customer without Prime etc., Amazon can offer you a personalized price, competitive with everybody.
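The margin point is worth spelling out: for a fixed revenue base, profit scales linearly with margin, so going from 2.5% to 5% doubles profit. A minimal sketch (the revenue figure is an arbitrary placeholder, not Walmart's actual revenue):

```python
# Profit = revenue * margin, so doubling the margin doubles profit
# at constant revenue. The revenue figure here is illustrative only.
revenue = 500e9                # hypothetical large-retailer revenue

profit_low = revenue * 0.025   # 2.5% margin -> $12.5B
profit_high = revenue * 0.05   # 5.0% margin -> $25.0B

print(profit_high / profit_low)  # 2.0
```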
Personalized prices are difficult to manage, partly because they can be gamed, and partly because it's bad PR.
I'm trying to enable it on a larger scale with https://icanpriceit.com/. But that's explicitly opt in: you name the price, and understand that you won't get a better price than you put in. You're trading the possibility of even lower prices for the ability to name your own price and have us fulfil it if it's within our minimum margin.
I can't see Amazon doing it, at least not within the next couple of years.
>I would say that Amazon itself is also a "become monopoly - then jack up prices" play
I think this is an important misconception. Amazon, Uber, Lyft, Grubhub, and pretty much the entire on-demand economy doesn't intend to raise prices.
They expect technology to reduce costs while they keep prices fixed. The details aren't even important, but there's a consensus out there that transportation is going to _somehow_ get a lot cheaper. The two main driving forces seem to be self driving vehicles (duh) and electric vehicles. EVs are expensive luxuries/green tech now, but as the battery costs keep dropping you'll soon have extraordinarily low maintenance, cheap to run vehicles that are way more efficient than internal combustion in the low speed, stop and go regime of delivery driving.
That's how I've always seen these markets anyway. It just doesn't make sense for the plan to be "monopolize then exploit" - short of hardcore regulatory capture or something that seems too transparently flawed to attract so much effort and investment.
Incorrect. This kind of thinking is what gets central banks throwing money at problems, expecting results, and seeing none. QE is literally buying bonds.
QE is a balance-sheet (asset-purchase) policy; rate increases/decreases are conventional monetary policy.
Didn't read the rest of your post. I'm sure you have a point. But the above misconception is troublesome and dangerous if pushed upward into policy.
This isn't a new idea. The real peak for invention was around 1880-1900. Steel production finally worked, electricity was known, the basic machine tools existed, and steam power was working well. That's when Edison was most active. His "invention factory" had a goal of a minor invention every two days and a major invention every three weeks. There were so many easy hits available waiting to be invented, and the tools were there to build them.
Some technologies develop rapidly and then hit a wall. Aviation developed rapidly from the Wrights' first flight in 1903 to the late 1960s, which produced the Boeing 747 and 737 (both still in production), the Concorde, the SR-71, the C-5A, and the Saturn V. Since then, improvements have been minor by comparison. Yet technical documents from the 1960s projected vast improvements: hypersonic ballistic transports, single-stage-to-orbit spacecraft, atomic-powered interplanetary rockets, and fusion drives. Even antigravity was studied seriously. Didn't happen.
In aviation, it all slowed down once the tech started to require larger and larger investments to research, combined with hitting harder boundaries on fundamental energy and propulsion capabilities. Individuals were priced out by the increasing capital requirements, and institutions (governments and large companies) are bad at making the non-sure-thing bets that individuals might otherwise pursue. Hence rocketry is only advancing now that some random billionaires came along to make investments which, as financial bets, would be disqualified in a heartbeat by most of the people holding the strings of business investment.
There's still capability to be gotten in propulsion, but I suspect a lot of it will revolve around basic materials research (which is definitely unattractive to the vast majority of potential investors).
I worry that the aviation market is an example of how modern capitalism is failing to systematically advance civilization (even apart from the class-inequality problems that are being argued about...).
Edit: and the failure is in any endeavor that doesn't fit typically short investment time scales and 'calculability' of ROI, not just aerospace.
Not just one, but a number of "random billionaires" suddenly started to invest in rockets and spacecraft during the last decade.
I'd rather link it to something else: maybe CAD advancements, maybe material science, maybe avionics, maybe the release of some regulatory limitations. Likely all of these in concert.
Sure, but my main point was that the advancement of civilization shouldn't depend on the whims of random billionaires. We shouldn't prevent random billionaires from developing the tech either, but the multi-decade trend of politics and society believing that "capitalism" provides a complete economic system has big holes in it for certain advancements: space travel, antibiotics, civilization-ending externalities like climate change. The pileup of what capitalism isn't addressing is getting very serious. And if you're being quantitative, skipping all or most long-term, low-visibility-payoff risks still leaves your economy behind one that is able to make those investments at least some of the time, instead of funding the next social-chat app or k-cup juice tech.
And incidentally - IMHO, I strongly suspect there is nothing in avionics, algorithmically or computationally, that modern rocketry needs that didn't already exist 30-40+ years ago (then again, I don't closely follow that anymore...). One of the key structural advancements for SpaceX rockets, friction stir welding, was created 20+ years ago. I may be too cynical, but really the only real advancement there is parties able and willing to take on long-term risks.
For me it is interesting to see that new ideas may be harder to find. My experience is that there are lots of great ideas that are just lying around, unused. In some of our work we are taking scientific or technical discoveries from the early 1900s and implementing them in new ways, in combination with mobile phones, for example, to make breakthroughs in possible uses.
We are building new water quality tests that make the phone a "mobile lab," using quite old science and targeting a huge market: the two billion people who drink unsafe water. It isn't your normal Silicon Valley play, as it requires quite a different set of resources to implement well. But the impact can be massive.
I'm surprised that amid the sea of comments lamenting the demise of individual experimentation, nobody has mentioned the vastly stricter regulatory environment surrounding individual experimentation. You can't buy any chemical equipment because you might be making drugs. Mechanical equipment might be used to make bombs or guns. For now I guess you might be able to get away with making transgenic bacteria in your garage, but it's only one or two media panics away from being banned because you might accidentally make grey goo. Plus, if you do try to do any sort of substantial experimentation, be prepared for your neighbors to report you to the police and your HOA to fine you for smell and noise.
Orville Wright would be behind bars today. Jonas Salk would be a public enemy. Alexander Fleming would have been kicked out of grad school for not properly sterilizing equipment.
Of course, you can buy seventeen kinds of homeopathic medicine at Walgreens. Everyone knows it doesn't work, but they also know it's harmless, which in the Rawlsian dystopia of a life lived behind the veil of ignorance, is the only thing that matters.
I've always wondered about this aspect of idea research (from the intro in the paper):
"For example, each new idea raises incomes by a constant percentage (on average), rather than by a certain number of dollars. This is the standard approach in the quality ladder literature on growth: ideas are proportional improvements in productivity."
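To make the quality-ladder assumption concrete, here's a toy sketch (my own illustration, not the paper's code, with made-up income and growth numbers) contrasting ideas that each raise income by a constant percentage against ideas that each add a constant dollar amount:

```python
def proportional_growth(income, ideas, pct=0.02):
    """Quality-ladder view: each idea multiplies income by (1 + pct)."""
    for _ in range(ideas):
        income *= 1 + pct
    return income

def additive_growth(income, ideas, dollars=1000):
    """Alternative view: each idea adds a flat dollar amount."""
    return income + ideas * dollars

start = 50_000
print(proportional_growth(start, 100))  # compounds: 50_000 * 1.02**100
print(additive_growth(start, 100))      # linear: 50_000 + 100 * 1000 = 150_000
```

The proportional version compounds, so the same stream of ideas produces exponential income growth, which is exactly why a falling rate of new ideas shows up as a falling growth rate rather than a falling dollar amount.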
I am not convinced they capture efficiency improvements (or externalizations) in this model. Say for example you have an idea for scrubbing sulfur out of coal. It becomes mandated by the EPA, it makes everyone around the coal power plants live 1.5% longer but it doesn't change their income at all. Was it a good idea? Was it an important idea?
We have invested billions in improving efficiency, from solar panels to cars to power plants. How much of that has offset income gains from growth? For example, if you put twice as many cars on the road but each uses half as much gas, the net total of gas consumed annually stays the same. So how is that represented in GDP? When a video game is distributed digitally, it costs a fraction of a cent in electricity to "manufacture" and copy to the user, but it represents $1 - $10 in economic activity.
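A rough illustration of that offset (all numbers invented for the sketch): fleet growth can be fully canceled by per-vehicle efficiency gains, leaving total fuel consumption flat, which is a real gain that income-based accounting largely misses.

```python
cars = 100_000_000       # hypothetical fleet size
fuel_per_car = 500       # hypothetical gallons per car per year

baseline = cars * fuel_per_car

# Double the fleet, but each car burns half as much fuel:
after = (cars * 2) * (fuel_per_car * 0.5)

assert after == baseline  # total consumption unchanged despite 2x cars
print(baseline, after)
```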
So clearly I've got problems with the definition of a "good" idea :-) I don't like tying it to income improvements. I'd much rather tie to a balance of externalities where the net change is fewer negative externalities. Harder to measure for sure, but ideas that increase efficiency will get as much "goodness" as ideas that increase income.
If we actually included the externalities of coal, the damage to property by acid rain, the health effects, the tearing the tops off our mountains to strip mine - then we could see things like scrubbing sulfur out as efficiency improvements.
This is one reason why many people think a technological singularity will never come. The rate of innovation isn't an exponential curve but rather an S-curve which will eventually flatten out. That doesn't mean that innovation will cease, but rather that the rate of innovation (the first derivative) will stop increasing and eventually decline.
That's true, for any area of technology. It may not be true for technology as a whole, though.
Take electricity, for example. It was climbing rapidly from 1900 to maybe 1950. After that it started flattening out. But the transistor was growing from 1950. One could argue that it started flattening out in 2010. It's still growing, but not as much. But genomics is just getting started. And so it goes, with the action moving from one area of technology to another.
Different areas of technology have profoundly different dynamics and interactions.
Much of the 19th century boom drove directly off of coal and what it made possible (steel, electricity, lights, electric motors, locomotives, manufacturing), and toward the end, petroleum and automobiles. Henry Ford's Model T began production in 1908, just after the end of the era, and the Wright Brothers' flight was in 1903, enabled by low-weight, high-output gasoline engines (we've improved considerably since then).
Vaclav Smil's Energy in World History provides a really good illustration of various technologies and capabilities, many with semi-log plots (log power vs. time). They show the overlapping-band dynamics you mention, but that applies pretty much exclusively to energy technologies.
The thing about information technology is that it only buys you so much. The Boeing 747, and for that matter the Dreamliner, both still continue the design originally pioneered with the 707 (and its bomber/transport predecessors). We've added avionics and controls and engine tuning and a slew of other elements which are far more advanced than was available in the 1950s and 1960s when the airframes were first produced. There's been a slight improvement in fuel efficiency, enough that today's turbofans are roughly comparable with prop-driven planes of the 1950s (props are more efficient, but slower, than jets). There's been a big increase in safety; witness the HN article a few days back noting that it's been 15 years since a major widebody jet fatality by a US carrier (smaller planes and non-US carriers, yes). Actual travel times have increased since the 1970s, due mostly to security checks, and also to some congestion and routing changes. Prices are down from the late 1970s, but not from the pre-oil-embargo days, at least not by nearly as much. (The airline deregulation story gets hugely oversold on that point.)
Genomics is another form of information processing, ultimately. It's not clear to me that it will have the same impacts as, say, clean fresh water supplies, sewerage, municipal waste removal, and public health and nutrition programs. Certainly not at the same costs.
The basic problem is cultural fragmentation of the middle class who are the main market for any product (the middle classes are the only class with the money and size to be worth mass selling).
There is so much more diversity now in what middle class people want that it is impossible for one idea to appeal to a large section of the population. When everyone in the middle class listened to the same music, watched the same TV shows, saw the same movies, ate the same food, drove the same cars, lived in the same houses, etc., it was possible to create a product that appealed to most. Today that is no longer possible, so all ideas are small and niche.
Regulation and risk avoidance have also played a strong role in killing ideas. For example, most of the pharmaceuticals developed from the 1950s to the 1970s could not be brought to market today.
A while ago I was thinking about Leonardo da Vinci and how today there really aren't any "renaissance" individuals. By "renaissance" individual I mean a multidisciplinary, prolific genius.
I sort of came to the conclusion that the paper proposes... new ideas are harder to find. To really make a breakthrough in a field seems to require intense specialization.
It is sort of sad, because some of the greatest innovators have traditionally started in some other field and often used the power of analogical thinking to come up with new ideas.
There are surely such individuals alive today, and others have lived between now and then. They may not be as visible or, if they are, are rarely presented in the same domain as Leonardo. I think it's better to say that no one alive today is as fetishized for their multi-disciplinary capability as da Vinci.
It's also not hard to argue that we have more surface area for innovation today, as well as a society that offers the 'resources' (I say this broadly) for individuals to pursue exploration and discovery.
It is fairly hard to measure the quality, and I guess to some extent the quantity, of ideas coming from individuals, and I agree da Vinci is vastly overrated.
But do you honestly think there are individuals alive today making extreme broad breakthroughs like Isaac Newton?
Part of the reason I believe previous scholars were so successful in coming up with a plethora of ideas is that it was easier to observe things. For example, Galileo just needed some really good glass aligned properly to see the planets, but today the amount of manpower needed to see into the cosmos far exceeds a single individual and their budget. Detecting neutrinos required a multi-story tank filled with water and extremely precise instruments. Meanwhile Newton took some glass (glass was damn useful back then... it was like the computer for innovation) and noticed light could be broken into colors.
Of course, but they are not broad ideas, like creating a new type of math while explaining how the planets circle the sun.
Like I said earlier, I don't know how you can place a value on or rate ideas, but please point out some individuals making cross-disciplinary breakthroughs at the level of Newton today. I'm actually sincere, because I would like to know more about these individuals.
The big issue I have with the "solitary genius" theory is that ideas do not occur in a vacuum. Isaac Newton, whom several people in this thread have mentioned, went to great lengths himself to dispel this myth. One of his most famous quotes is:
"If I have seen further, it is by standing on the shoulders of giants."
Scientific progress is incremental; it builds on the work and ideas that have come before.
"we have more surface area for innovation today as well a society that offers the 'resources' (i say this broadly) for individuals to pursue explorations and discovery"
Yeah, it's not obvious that the thesis here is true; in fact I would suggest that it's probably not true.
The surface area of new ideas is probably expanding, and the resources for finding new ideas are almost certainly expanding as well.
------------
seems like the better argument is that ideas in the future will be in different idea spaces than the current ideas
and that within a specific idea space, you can exhaust the ideas
As Alan Kay mentions in one of his videos, da Vinci had no access to an engine. All his designs were on paper. If he had an engine available, he might have made a lot of narrow progress, instead of being so broad.
In fact, in that video Kay compares da Vinci to Henry Ford. Ford was hardly a scientific or design genius, but he had access to the right equipment and had a much greater impact on the world than Leonardo.
I think one thing that really stands out is that the real trick is to find the real problem that hides underneath the obvious solutions. And those problems might be harder to find because entrepreneurs, especially in Silicon Valley, are young men who don't have much experience with what problems exist out there and are worth solving.
> We present a wide range of evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply.
I think this is a natural course of things, one also suggested by the "law of diminishing returns" [1]. As I understand it, it's unlikely that the growth multiple can be maintained even if the number of people involved in the field is constantly rising. What matters more, however, is whether the returns in absolute numbers are greater than they were before.
Making a rough assumption that the number of computers in the 1970s was 100,000, a performance gain of 10X would have meant a return of 1,000,000. Supposing the number of computers has since multiplied by 100,000, even a gain of 1% beats the old figure by a long margin.
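Redoing that rough arithmetic as a sketch (all figures are the assumed ones above, not real data), the point is that a huge installed base turns even a tiny relative gain into a large absolute return:

```python
computers_1970s = 100_000
old_gain = 10                                # 10x performance improvement
old_return = computers_1970s * old_gain      # 1,000,000

computers_now = computers_1970s * 100_000    # installed base grew 100,000x
new_gain = 0.01                              # a mere 1% improvement
new_return = computers_now * new_gain        # ~100,000,000

assert new_return > old_return  # absolute return dwarfs the old figure
print(old_return, new_return)
```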
I guess there could come a time when the gain in absolute numbers won't be significant enough to justify the added costs, but in today's world it matters to squeeze out small performance gains as long as more and more people are using the products of those industries.
No, people are just too attached to good ideas requiring an acquisition and/or lots of cash. Some ideas might not have a cash or business value. I hope we're not training the next generation to tie ideas and money together too tightly.
I've found the problem with inventions (personal projects) is that you have to maintain them afterwards. And once you're maintaining too much, you've run out of free time for the rest of your life - I've been running one for ten years that is famously unchanged and isn't going to change.
If you're just talking about tech, the life pattern of forcing everyone in the industry to move to Silicon Valley and work all day on someone else's ideas to cover rent can't be helping.
I find there's a lack of clarity on just what technology is, and how the specific mechanisms incorporated into it operate. I've been constructing an ontology and identify nine discrete areas. Each has its own capabilities, limitations, and consequences. Moore's Law applies to only a very limited subset of dendritic and networked structures.
That itself is not a new idea. Most of the ideas which are easy to find and cheap to exploit have already been found. A possible declining trend for worldwide innovation (http://accelerating.org/articles/InnovationHuebnerTFSC2005.p...), published in 2005, arrives at a similar conclusion.
This video [1] has a great explanation and exploration of why innovation seems to be slowing down. He uses data coming from the patent system over a period of 100 years.
The video touches on much of what is in this study and does a great job of explaining it.
Whenever the COST trend of something is being considered, it must be balanced with the RETURN trend.
It might cost more to double a transistor's capabilities (as a combination of size, speed, manufacturing reliability, etc.), but as transistors not only improve current applications but take on whole new areas of usefulness, the returns on investment may actually go up.
I am not saying they do or do not. But COSTS in isolation from RETURNS can't be used to conclude anything about the effectiveness of research efforts.
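As a minimal sketch of that possibility (all numbers invented; this only shows it *can* happen, not that it does): if each doubling opens enough new applications, ROI can rise even while research cost per doubling rises.

```python
# Hypothetical cost and return per successive capability doubling:
cost_per_doubling = [1, 3, 10, 30]       # research cost rising steeply
return_per_doubling = [2, 8, 40, 200]    # returns rising even faster

roi = [r / c for c, r in zip(cost_per_doubling, return_per_doubling)]
print(roi)  # ROI per doubling rises despite rising costs
```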
Breakthrough ideas are made by a different type of mind than the usual one: chaotic, messy... recombining stuff not expected to be recombined.
People who worship structure and efficiency murdered this one.
OK, how about "The advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end." -- Patent Office Commissioner, Henry Ellsworth to Congress in 1843.
My point was that this impression is human nature and thus suspect.
"But Commissioner Ellsworth was simply using a bit of rhetorical flourish to emphasize the growing number of patents as presented in the rest of the report. He even outlined specific areas in which he expected patent activity to increase in the future."
Pretty simple production frontiers result. IT moved the production possibilities frontier outward about 30 years ago. Everything experiences diminishing returns.
PS: I especially like Cunningham's Law: when you confidently state something inaccurate online, someone's bound to correct you. It's a great way to crowdsource an answer that you might have a hard time looking up.
[1] http://www.bu.edu/research/articles/funding-for-scientific-r...