I like Steve's content, but the ending misses the mark.
With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
I say this as someone who has worked for 7 years implementing AI research for production, from automated hardware testing to accessibility for nonverbals: I don't think founders need to obsess even more than they do now about implementing AI, especially in the front end.
This AI hype cycle is missing the mark by building ChatGPT-like bots and buttons with sparkles that perform single OpenAI API calls. AI applications are not a new thing, they have always been here, now they are just more accessible.
The best AI applications work beneath the surface to empower users; Jeff Bezos said as much (in 2016!) [1]. You don't see AI as a chatbot on Amazon, you see it in "demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations."
[1]: https://www.aboutamazon.com/news/company-news/2016-letter-to...
"With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence."
I'm missing something here. First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did.
So if "most companies are not in the field of Artificial Intelligence", that could mean that they ought to be.
However, I draw a somewhat different conclusion: the business that companies ranging from Newsweek to accountants to universities to companies' HR departments should see themselves in is intelligence, regardless of whether that's artificial or otherwise. The question then becomes which supplies that intelligence better: humans or LLM-type AI (or some combination thereof)? I'm not at all sure that the answer at present is LLM-AI, but it is a different question, and the answer may well be different in the near future.
There are of course other kinds of AI, as you (jampa) mention. In other words, AI is not (for now) one thing; LLMs are just one kind of AI.
Commercial endeavors exist to provide goods and services to consumers and users.
The author's implication here is that providers of services who continue using human resources rather than AI are potentially acting like carriage manufacturers.
Of course that assumes improvements in technology, which is not guaranteed.
First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did
But if all 400+ carriage makers had pivoted, would they have had a chance to survive very long? Would they all have made more money pivoting? The idea that all this is only a "lack of vision" rather than hard business choices is kind of annoying.
This. Carmaking is not viable on a small scale the way carriage making is. If they had all pivoted, perhaps 10 instead of 1 would have survived through 1929, and the fate of all the others would have been the same - except that by staying carriage makers till the end they at least continued to extract profits, whereas by all trying to become carmakers they'd have wasted that money on retooling and retraining and whatnot and never made it back.
This is a different way of saying that people must learn how to use a new technology. Think of cars, radio, the internet, or smartphones: it took a while for people to understand them, but some things are so disruptive that eventually they find a way into your life in all forms.
I'm guessing for someone in the laundry or restaurant business it might be hard to understand how AI could change their lives. And that is true, at least at this stage in the adoption and development of AI. But eventually it will find a way into their business in some form or other.
There are stages to this. Pretty sure the first jobs to go will be the easiest. This is the case with software development too. When people say writing code has gotten easier, they are really talking about projects that were already easy to build getting even easier. The harder parts of software development are still hard. Making changes to large code bases with a huge user base comes with problems where writing code is kind of irrelevant. There are bigger issues to address like regression, testing, stability, quality, user adoption, etc.
The second stage, of course, is once the easy stuff gets too easy to build: there is little incentive to build it. With modern building techniques we aren't building infinite huts, are we? We pivoted to building skyscrapers. I do believe most of AI's automation gains will be soaked up in the first wave, there will be little incentive to build the easy stuff, and the harder stuff will have more productivity demands on people than ever before.
I can strain the analogy just enough to get something useful from it.
If we laboriously create software shops in the classical way, and suddenly a new shop appears that is buggy, noisy, etc but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of these old shops are not going to make it.
It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.
I'd like to see analysis of what happened to the employees - blacksmiths, machinists, etc. Surely there were transferable skills, and many went on to work on automobiles?
This Stack Exchange question implies there was some transition rather than chaos.
Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" when AI becomes a much more mechanical way to produce, vs employing opinionated experts.
I agree as I point out in other comments here - you said it with more detail.
AGI + robot is way beyond a mere change in product conception or implementation. It's beyond craftsmen v. modern forms of manufacturing we sometimes read about with guns.
It is a strain indeed to get from cars v. buggies to AGI. I dare say that without AGI as part and parcel of AI, the internalization of AI must necessarily be quite different.
> With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.
I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:
1. Massive dose of FOMO from leadership terrified of falling behind
2. A fundamental lack of core competency. Many of these companies (I'm talking about more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product
There's a qualitative difference between ok transport and better transport vs AI.
If we're going to talk cars, I think what the Japanese did to the big three in the 1980s would have been far more on point.
AI is encumbered by AGI which is further encumbered by the delta between what is claimed possible (around the corner) and what is. That's a whole different ball game with wildly different risk/reward tradeoffs.
Learning about history post buggies didn't do much for me.
Mobility is not an analogy for AI, it's an analogy for whichever industry you work in. If you publish a magazine, you may think you're in the 'publishing' business and see AI as a weak competitor, maybe capable of squashing crappy blogs but not prestigious media like yours. But maybe what you're really in is the 'content' business, and you need to recognize that sooner or later, AI is going to beat you at the content game even if it couldn't beat you at the publishing game. The kicker being that there no longer exists a publishing game, because AI.
Or more likely, you are in the publishing business but the tech world unilaterally deemed everything creative to be a fungible commodity and undertook a multi-billion dollar campaign to ingest actual creative content and compete with everyone that creates it in the same market with cheap knockoffs. Our society predictably considers this progress because nothing that could potentially make that much money could possibly be problematic. We continue in the trend of thinking small amounts of good things are not as good as giant piles of crap if the crap can be made more cheaply.
Viewed from a different angle I think he's probably close. A service provider changing the back end while leaving the front end UI similar is not dissimilar to early cars being built like carriages. But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
> But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
For a bunch of stuff - banks, online shopping, booking a taxi, etc - this shift already happened with non-LLM-based "send me notifications of unusual account activity" or even the dead-simple "send me an email about every transaction on my bank account." Phone notifications moved it from email to built-into-the-OS even.
The "LLM hype cycle" tweak becomes something like "have an LLM summarize the email instead of just listing the three transactions" which is of dubious use to the average user.
No, the shift hasn't happened yet at all. Let's take those examples one by one.
Banks: Normal retail customers are responsible for managing their account balances, importing transaction data into whatever bookkeeping system they use, downloading their tax forms for filing, adjusting their services and strategy based on whatever they're planning to do in their life, etc. Private banking is a reasonable model for the service that everyone should get, but can't, because it's too expensive.
Online shopping: Most people have to figure out what they're looking for, research the options, figure out where to order from, keep track of warranties, repairs, returns, recalls, maintenance, consumables, etc. Personal assistants can absorb most of that, but that's expensive.
Booking a taxi: On the same theme, for all the scheduled travel that should be booked and ready to go based on your calendar. Personal assistants can do this too, but again it's expensive.
The core ideas of giving the service provider context, guidance, and autonomy to work without regular intervention are not unique to automation but only recently is there a conceivable path to building software that can actually deliver.
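As a rough illustration of what "context, guidance, and autonomy" could look like in software terms, here is a minimal sketch; the `llm_decide` function, the action format, and the tool names are all hypothetical stand-ins, not a real API, just the shape of a loop that can work without regular intervention.

```python
# Hypothetical assistant loop: give the service provider standing context and guidance,
# then let it act with tools until it decides it's done or genuinely needs the user.

def run_assistant(context: dict, guidance: str, tools: dict, llm_decide, max_steps: int = 10):
    history = []
    for _ in range(max_steps):
        # The model sees the user's standing context, its guidance, and what it has done so far.
        action = llm_decide(context=context, guidance=guidance, history=history)
        if action["type"] == "done":
            return action["summary"]
        if action["type"] == "ask_user":
            return f"Needs input: {action['question']}"
        # Otherwise, call a tool (book, notify, file, ...) and record the result.
        result = tools[action["tool"]](**action["args"])
        history.append((action, result))
    return "Stopped: step limit reached"
```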
> The best AI applications are beneath the surface to empower users
Not this time, tho. ChatGPT is the iphone moment for "AI" for the masses. And it was surprising and unexpected both for the experts / practitioners and said masses. Working with LLMs pre gpt3.5 was a mess, hackish and "in the background" but way way worse experience overall. Chatgpt made it happen just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iphone presentation.
The fact that we went from that 3.5 to whatever claude code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6mo behind big lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before chatgpt launched. It is insane!
You'll probably laugh at this, but a lot of fine-tuning experimentation and gains in the open source world (hell, maybe even at the big labs, but we'll never know) is from the "horny people" using local llms for erotica and stuff. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space, this one is different, no matter how much anti-hype tokens get spent on this subject.
ChatGPT wasn’t the iphone moment, because the iphone wasn’t quickly forgotten.
Outside of software, most adult professionals in my network had a play with chatgpt and have long since abandoned their accounts. They can’t use chatbots for work (maybe data is sensitive, or their ‘knowledge work’ isn’t the kind that produces text output). Our native language is too poorly supported for life admin (no Gemini summaries or ‘help writing an email’). They just don’t have any obvious use case for LLMs in their life.
It’s tough because every CEO and VC is hyperventilating about LLMs as a paradigm shift for humanity when in reality they are useful but also so are gene editing and solid state batteries and mrna vaccines. It’s just that software innovations are much more attractive to certain groups with money.
"It’s tough because every CEO and VC [on LinkedIn and CNBC] is hyperventilating about LLMs as a paradigm shift for humanity"
I guess there's a quiet majority thing going on where the vast majority of businesses are just not integrating chatbots because their business is not generating text.
Not only that, there is active backlash against talking about ChatGPT in social circles now. Whereas around March 2023 it was the topic of conversation. Then when something new dropped it came up again, and most people had used it and had an interesting story, mainly about asking it for some sort of advice. Now when someone mentions it or tries to show you something it's mostly an eye roll, and to the non-tech general user it hasn't made any major improvement since mid 2023. Most people I know are in fact complaining about the amount of crappy AI content and are actively opposed to it.
>>Outside of software, most adult professionals in my network had a play with chatgpt and have long since abandoned their accounts.
I know an architect whom I spent a while encouraging to use it. She said ChatGPT would, most of the time, turn a bedroom window into a restroom. It's kind of hilarious, because guessing the next word and spatial thinking seem to be very different beasts altogether. And in some way they might be two different tracks of intelligence, like two different types of AGI.
A picture is better than a thousand words, as the saying goes.
My guess is a picture is better than infinite words. How do you explain something as it exists? You can use as many words, phrases, metaphors, and similes as you like, but is it really possible to describe something in words and not have two different people, or even a computer program, imagine it very differently?
Another way of looking at this is that language itself might be several layers below intelligence. If you notice, you can get close but never accurately describe what you are thinking. If that is the case we are truly cooked and might never have AGI itself, as there is only so far you can go representing something you don't understand by guessing.
It may be true, but Bezos' comment is also classic smoke blowing. "Oh well, you can't see us using <newest hype machine> or quantify its success, but it's certainly in everything we do!"
But it’s completely true — Amazon undoubtedly has a pretty advanced logistics set up and certainly uses AI all over the place. Even if they’re not a big AI researcher.
There are a lot of great use cases for ML outside of chatbots
But also, like, how much of that is really "AI" in the general sense, as it applies to things like ChatGPT today? Do you really need a massive, resource-intensive system for product recommendations and things related to Amazon's marketing?
The Amazon store chatbot is amongst the worst implementations I've seen. The old UI, which displayed the customer questions and allowed searching them, was infinitely better.
FWIW, the old UI (which I agree is better) is still available. Once the "AI search" is done, there's a dropdown you can click and it will show all the reviews that include the word you searched.
Are you seriously suggesting the crappy AI bot on Amazon product pages is evidence of an 'AI' revolution? The thing sucks. If I'm ready to spend money on a product, it's worth my time to do a traditional keyword search and quickly scroll through the search returns to get the contextualized information, rather than hoping an LLM will get it right.
Right. The point is that in frothy market conditions and a general low-integrity regime in business and politics, there is a ton of incentive to exploit FOMO far beyond its already "that's a stiff sip there" potency, and this leads to otherwise sane and honest people getting caught up in doing concrete things today based on total speculation about technology that isn't even proposed yet. A good way to really understand this intuitively is to take the present-day intellectual and emotional charge out of it without loss of generality: we can go back and look at Moore's Law, for example, and the history of how the sausage got made in reconciling a prediction of exponential growth with the realities of technological advance. It's a fascinating history; there's at least one great book [1] and the Asianometry YouTube documentary series on it is great as always [2].
There is no point in doing business and politics and money-motivated stuff based on the hypothetical that technology will become self-improving. If that happens we're through the looking glass, not in Kansas anymore, "Roads? Where we're going, we won't need roads." It won't matter, or at least it won't be what you think; it'll be some crazy thing.
Much, much, much, much more likely is that this is like all the other times we made some real progress, people got too excited, some shady people made some money, and we all sobered up and started working on the next milestone. This is by far both A) the only scenario you can do anything about and B) the only scenario honest experts take seriously, so it's a double "plan for this one".
The quiet ways that Jetson Orin devices and the like will keep getting smarter and more trustworthy about not breaking things - that's the bigger story. It will make a much bigger difference than a snazzy Google that talks back, but it's taking time, it's appearing in the military first, it comes in fits and starts, and it has all the other properties of, you know, reality.
This article assumes that a company is like an organism trying to survive. In fact the company is owned by people who want to make money and who may well decide that the easiest way to do that is to make as much money as possible in the existing business and then shut it down.
Fundamentally this article is reasoning in units of “companies,” but the story is different when reasoning in terms of people.
It turns out automobile companies need way more employees than carriage companies, so the net impact on employment was positive. Then add in all the jobs around automobiles like oil, refining, fueling, repair, road construction, etc.
Do we care if companies put each other out of business via innovation? On the whole, not really. People who study economics largely consider it a positive: “creative destruction.”
The real question of LLM AI is whether it will have a net negative impact on total employment. If so, it would be the first major human technology in history to do that. In the long run I hope it does, because the human population will soon level off. If we want to keep economic growth and standards of living, we will need major advances in productivity.
Let us see how this will age. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement, we still need some fundamental new ideas. Integration with external tools will help but not overcome fundamental limitations. Once the hype is over, I think large language models will have a place as simpler and more accessible user interface just like graphical user interfaces displaced a lot of text based interfaces and they will be a powerful tool for language processing that is hard or impossible to do with more traditional tools like statistical analysis and so on.
[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory not susceptible to hallucinating facts.
> The current generation of AI models will turn out to be essentially a dead end.
It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".
To give some pause before dismissing the current state of the art prematurely:
I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.
I would argue that the biggest limitation on current "AI" is that it is architected to not have agency; if you had GPT-3 level intelligence in an easily anthropomorphizeable package (furby-style, capable of emoting/communicating by itself) public outlook might shift drastically without even any real technical progress.
I think the main thing I want from an AI in order to call it intelligent is the ability to reason. I provide an explanation of how long multiplication works and then the AI is capable of multiplying arbitrary large numbers. And - correct me if I am wrong - large language models can not do this. And this despite probably being exposed to a lot of mathematics during training whereas in a strong version of this test I would want nothing related to long multiplication in the training data.
I'm not sure if popular models cheat at this, but if I ask for it (o3-mini) I get correct results/intermediate values (for 794206 * 43124, chosen randomly).
I do suspect this is only achievable because the model was specifically trained for this.
But the same is true for humans; children can't really "reason themselves" into basic arithmetic-- that's a skill that requires considerable training.
I do concede that this (learning/skill acquisition) is something that humans can do "online" (within days/weeks/months) while LLMs need a separate process for it.
> in a strong version of this test I would want nothing related to long multiplication in the training data.
Is this not a bit of a double standard? I think at least 99/100 humans with minimal previous math exposure would utterly fail this test.
I just tested it with Copilot on two random 45-digit numbers, and it gets it correct by translating it into Python and running it in the background. When I asked it not to use any external tools, it got the first five digits, the last two, and a handful more in the middle correct, out of 90. It also fails to calculate the 45 intermediate products - one number times one digit of the other - including multiplying by zero and one.
The models can do surprisingly large numbers correctly, but they essentially memorized them. As you make the numbers longer and longer, the result becomes garbage. If they would actually reason about it, this would not happen, multiplying those long numbers is not really harder than multiplying two digit numbers, just more time consuming and annoying.
And I do not want the model to figure multiplication out on its own, I want to provide it with what teachers tell children until they get to long multiplication. The only thing where I want to push the AI is to do it for much longer numbers, not only two, three, four digits or whatever you do in primary school.
And the difference is not only in online vs offline, large language models have almost certainly been trained on heaps of basic mathematics, but did not learn to multiply. They can explain to you how to do it because they have seen countless explanation and examples, but they can not actually do it themselves.
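For reference, the procedure being discussed here is just schoolbook long multiplication: form the per-digit intermediate products and add them with the appropriate shifts. A short Python sketch of it (my own illustration of the algorithm, not anything from the thread):

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook long multiplication on decimal strings.

    For each digit of b (right to left), multiply all of a by that digit
    (an 'intermediate product'), shift it by the digit's position, then sum.
    """
    partials = []
    for i, db in enumerate(reversed(b)):
        carry, digits = 0, []
        for da in reversed(a):
            carry, d = divmod(int(da) * int(db) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        partials.append(int("".join(reversed(digits)) + "0" * i))
    return str(sum(partials))

# Two 45-digit inputs produce 45 intermediate products, one per digit of b.
assert long_multiply("794206", "43124") == str(794206 * 43124)
```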
When kids learn multiplication, they learn it on paper, not just in their heads. LLMs don’t have access to paper.
“Do long arithmetic entirely in your mind” is not a test most humans can pass. Maybe a few savants. This makes me suspect it is not a reliable test of reasoning.
Humans also get a training run every night. As we sleep, our brains are integrating our experiences from the day into our existing minds, so we can learn things from day to day. Kids definitely do not learn long multiplication in just one day. LLMs don’t work like this; they get only one training run and that is when they have to learn everything all at once.
LLMs for sure cannot learn and reason the same way humans do. Does that mean they cannot reason at all? Harder question IMO. You’re right that Python did the math, but the LLM wrote the Python. Maybe that is like their version of “doing it on paper.”
Intelligence alone does not have ethical implications w.r.t. how we treat the intelligent entity. Suffering has ethical implications, but intelligence does not imply suffering. There's no evidence that LLMs can suffer (note that that's less evidence than for, say, crayfish suffering).
I think LLMs are much closer to grasping movement prediction than the cat is to learning english for what its worth.
IMO "ability to communicate" is a somewhat fair proxy for intelligence (even if it does not capture all of an animals capabilities), and current LLMs are clearly superior to any animal in that regard.
>I would already consider LLM based current systems more "intelligent" than a housecat.
An interesting experiment would be to have a robot with an LLM mind and see what things it could figure out, like would it learn to charge itself or something. But personally I don't think they have anywhere near the general intelligence of animals.
It may be that LLM-AI is a dead end on the path to General AI (although I suspect it will instead turn out to be one component). But that doesn't mean that LLMs aren't good for some things. From what I've seen, they represent a huge improvement in (machine) translation, for example. And reportedly they're pretty good at spiffing up human-written text, and maybe even generating text--provided the human is on the lookout for hallucinations (and knows how to watch for that).
You might even say LLMs are good with text in the same way that early automobiles were good for transportation, provided you watched out for the potholes and stream crossings and didn't try to cross the river on the railroad bridge. (DeLoreans are said to be good at that, though :).)
This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.
Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).
Sure, if I ask about things I know nothing about, then I can get something done with little effort. But when I ask about something where I am an expert, then large language models have surprisingly little to offer. And because I am an expert, it becomes apparent how bad they are, which in turn makes me hesitate to use them for things I know nothing about because I am unprepared to judge the quality of the response. As a developer I am an expert on programming and I think I never got something useful out of a large language model beyond pointers to relevant APIs or standards, a very good tool to search through documentation, at least up to the point that it starts hallucinating stuff.
When I wrote dead end, I meant for achieving an AI that can properly reason and knows what it knows and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying fact that one has to double check what the model says.
I think that what's available today is a drain on productivity, not an improvement, because it's so unreliable that you have to babysit it constantly to make sure it hasn't fucked up. That is not exactly reassuring as to the future, in my view.
Isn't this entirely missing the point of the article?
> When early automobiles began appearing in the 1890's - first steam-powered, then electric, then gasoline - most carriage and wagon makers dismissed them. Why wouldn't they? The first cars were: Loud and unreliable, Expensive and hard to repair, Starved for fuel in a world with no gas stations, Unsuitable for the dirt roads of rural America
That sounds like complaints against today's LLM limitations. It will be interesting to see how your comment ages in 5-10-15 years. You might be technically right that LLMs are a dead end. But the article isn't about LLMs really, it's about the change to an "AI" world from a non-AI world and how the author believes it will be similar to the change from the non-car to the car world.
Sorry, but to say current LLMs are a "dead end" is kind of insane if you compare them with the previous records at general AI before LLMs. The earlier language models would be happy to be SOTA on 5 random benchmarks (like sentiment or some types of multiple-choice questions), and SOTA otherwise consisted of some AIs that could play like 50 Atari games. And out of nowhere we have AI models that can do tasks which are not in the training set, pass Turing tests, tell jokes, and work out of the box on robots. It's literally an insane level of progress, and even if current techniques don't get to full human level, it will not have been a dead end in any sense.
I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics, feed it basic school books and explanations and exercises just like a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.
The current generation of LLMs has a very limited ability to learn new skills at inference time. I disagree that this means they cannot reason. I think reasoning is by and large a skill which can be taught at training time.
Do you have an example of some reasoning ability any of the large language models has learned? Or do you just mean that you think, we could train them in principle?
But a dead end to what? All progress eventually plateaus somewhere. It's clearly insanely useful in practice. And do you think there will be any future AGI whose development is not helped by current LLM technology? Even if the architecture is completely different, the ability of LLMs to understand human data automatically is unparalleled.
You're in a bubble. Anyone who is responsible for making decisions and not just generating text for a living has more trouble seeing what is "insanely useful" about language models.
I don’t think you’re right about that. LLMs are very good for exploring half-formed ideas, (what materials could I look at for x project?), generating small amounts of code when it’s not your main job, and writing boring crap like grant applications.
That last one isn’t useful to society, but it is for the individual.
I know plenty of people using LLMs for stuff like this, in all sorts of walks of life.
edit (it's late, I'm just being snarky. I don't think researchers whose jobs are implicitly tied to hype are a good example of workers increasing their productivity)
To reaching AI that can reason. And sure, as I wrote, large language models might become a relevant component for processing natural language inputs and outputs, but I do not see a path towards large language models becoming able to reason without some fundamentally new ideas. At the moment we try to paper over this deficit by giving large language models access to all kinds of external tools like search engines, compilers, theorem provers, and so on.
When LLMs attempt novel problems (I'm thinking of pure mathematics here) they can try possible approaches, examine by themselves which approaches are working and which are not, and then come to conclusions. That is enough for me to conclude they are reasoning.
But it doesn't understand. It's just similarity and next-likely-token search. The trick is that this turns out to be useful or pleasing when tuned well enough.
Implementation doesn't matter. Insofar as human understanding can be reflected in a text conversation, its distribution can be approximated by a distribution over next-token predictions. Hence there exist next-token predictors which are indistinguishable from a human over text - and I do not distinguish identical behaviors.
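For readers unfamiliar with the phrase, a toy illustration of "next likely token search" at its most basic follows; this is a word-level count model, which is emphatically not how LLMs work internally (they learn the distribution with neural networks over subword tokens), it only shows the idea of estimating a next-token distribution.

```python
# Toy next-token predictor: estimate P(next word | previous word) from raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev: str) -> dict:
    c = counts[prev]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_token_distribution("the"))   # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(counts["the"].most_common(1))     # the single most likely continuation
```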
There is some truth to this, but the biggest concerns I have about AI are not related to who will realize the change is coming. They are moral/ethical concerns that transcend any particular market. Things connected to privacy, creativity, authorship, inequality and the like. This means that AI isn't really the cause of these concerns, it's just the current front line of these larger issues, which have persisted across all manner of disruptions across all manner of industry.
This kind of just-so story is easy to write after the fact. It's harder to see the future at the time.
How many people read a version of the same story and pivoted their company to focus on SecondLife, NFTs, blockchain or whatever else technology was hyped at the time and tanked? That's the other half of this story.
You can replicate real life, but it's kind of boring.
- 3D printing
Became a useful industrial tool, but home 3D printing never went mainstream. At one point Office Depot offered 3D printing. No longer.
- Self-driving minibuses
Several startups built these, and some were deployed. Never really caught on. You'd think that airport parking shuttles and such would use these, but they don't.
- Small gas turbines
Power for cars, buses, trucks, backup power, and other things where you need tens to hundreds of kilowatts in a small package. All those things were built and worked. But the technology never became cheap. Aircraft APUs for large aircraft and the US Army's M1 tank variants remain one of the few deployed applications. The frustration of turbine engines is that below bizjet size, smaller units are not much cheaper.
- 3D TV
That got far enough that 3D TV sets were in stores. But they didn't sell.
- Nuclear power
Works, mostly, but isn't really cost-effective. Failures are very expensive and require evacuating sizable areas.
- Proof of correctness for programs
After forty years, it's still a clunky process.
- Maglev trains
Works, but insanely expensive.
- The Segway
Works, but scooters do the same job with less expense.
- 3D input devices
They used to be seen at trade shows, but it turns out that they don't make 3D input easier.
Metaverse (virtual worlds) did catch on - virtual offices and storefronts didn't really catch on, but people enjoy virtual worlds for: competitive and cooperative gaming; virtual fashion and environment construction; chat and social interaction; storytelling; performance; etc. Mostly non-commerce recreation activities. Look at the success of fortnite, minecraft, world of warcraft, etc. These share the dimension of shared recreational experiences and activities that give people a reason to spend time in the virtual world.
We have a system to which I can upload a generic video, and which captures eveeeeeerything in it, from audio, to subtitles onscreen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and, still, there are so many people who seem to believe that this won't revolutionize most fields?
The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked sufficiently into to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved though the use of these technologies, the surface of which we've only barely scratched!
Billions are being poured into trying to use LLMs and GenAI to solve problems, trying to create the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.
From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!
I'm sorry I highjacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!
People dislike the unreliability and not being able to reason about potential failure scenarios.
Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
And lastly, you've gone to great lengths to completely air gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote on an inference cluster?
I mostly agree with all your points being issues, I just don't see them as roadblocks to the future I mentioned, nor do I find them issues without solutions or workarounds.
Unreliability and difficulty reasoning about potential failure scenarios is tough. I've been going through the rather painful process of taming LLMs to do the things we want them to, and I feel that. However, for each feature, we have been finding what we consider to be rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs and it is adding immense value. It would not be possible because of (i) a subset of the features themselves, which simply would not be possible; (ii) time to market. We are now offloading the parts of the LLM which would be possible with code to code — after we've reached the market (which we have).
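For what it's worth, one common pattern of the general kind being hinted at (a sketch under assumptions: `call_llm`, the expected fields, and the fallback are hypothetical, not this commenter's actual product code) is to constrain the model to structured output, validate it, and retry or fall back rather than trusting the first answer.

```python
# Sketch of a common guardrail: ask for structured output, validate it,
# retry a bounded number of times, and fall back to a safe default.
# call_llm() is a hypothetical stand-in for whatever model API is in use.

import json

REQUIRED_FIELDS = {"category": str, "confidence": float}

def parse_and_validate(raw: str) -> dict:
    data = json.loads(raw)  # raises on malformed JSON
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"missing or mistyped field: {field}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

def classify(text: str, call_llm, retries: int = 2) -> dict:
    prompt = (
        'Classify the text. Reply with JSON {"category": str, "confidence": float}.\n\n'
        + text
    )
    for _ in range(retries + 1):
        try:
            return parse_and_validate(call_llm(prompt))
        except (json.JSONDecodeError, ValueError):
            continue  # unreliable output: try again
    return {"category": "unknown", "confidence": 0.0}  # safe fallback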
> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
I don't see how this would necessarily happen. I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem with AI producing code that looks right but isn't exactly. I see all of these, but I don't see them as roadblocks - not any more than I see human error as a roadblock in many of the cases where these systems I'm thinking about will be going.
With regards to customers' IP, this seems again more to do with the fact some junior dev is being allowed to do this? Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing around code using pastebin years ago. This is not an LLM problem (though certainly it is exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).
I'll put this another way: Just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles is unbelievable. Just on the back of that, nothing else, we can create rather amazing products or utilities. How is this not revolutionary?
So would a universal cancer vaccine, but no one is acting like it's just around the corner.
I'm old enough to remember when "big data" and later "deep data" was going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.
AI as currently marketed is just that with an LLM chatbot.
I definitely don't think so. You're seeing companies who have a lot of publicity on the internet. There are tons of very successful SMBs who have no real idea of what to do with AI, and they're not jumping on it at all. They're at risk.
An interesting aspect that doesn't seem captured by TFA and similar articles is that it is not a specific kind of business that is being disrupted, but rather an entire genre of labor on which they all rely to varying extents: knowledge work. Furthermore, "knowledge work" is a very broad term that encompasses an extremely broad variety of skillsets (engineering, HR, sales, legal, medical...) And knowledge workers are indeed being rapidly disrupted by GenAI.
This is an interesting phenomenon that probably has no historical equivalent and hence may not have been fully contemplated in any literature, and so comparisons like TFA fall short of capturing the full implications.
Whether these companies see themselves an AI company seems orthogonal to the fact that they should acknowledge this sea-change and adapt. However, currently all industries seem to be thinking they should be an "AI company" and are responding by trying to stuff AI into any product they can. Maybe the urgency for them to adapt should be based on the degree to which knowledge work is critical to their business.
> Even with evidence staring them in the face, carriage companies still did not pivot, assuming cars were a fad.
I like this quote. But this analogy doesn't exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike the artisans who made carriages, the CEOs saying these things have very clear motivations to make you believe the hype.
I think CEOs who think this way are a self-fulfilling prophecy of doom. If they think of their employees as cogs that can be replaced, they get cogs that can be replaced.
The median CEO salary is in the millions, they do not have to ever worry about money again if they can just stick around for one CEO gig for a couple of years
Granted, people who become CEOs are not likely to think this way
But the fact is that when people have so much money they could retire immediately with no consequences, they are basically impossible for a business to hold accountable outside of actual illegal activity
And let's be real. Often it's difficult to even hold them accountable for actual illegal activity too
>At the extreme end, research shows that 1 in 3 CEOs are fired within 18 months.
And the size of the parachute they get when they're tossed from the plane? I know there are many small companies with someone in a "CEO" position who might not be hugely compensated, but we're speaking of CEOs at major corporate ventures here, as is commonly understood when one talks about questions of executive responsibility (or lack thereof). Let's be real about some actual average severance figures for a clearer picture of consequences and "punishment".
If you’re playing at that level, you’re not thinking about subsistence living and never having to work again. You are driven by ego, by winning, by legacy. All three incentivize you to do well if your board consists of non-asshats. You are playing a multi-shot game.
Isn't this good for the CEO? If your employees aren't cogs, then what do you do if they leave? The more replaceable they are, the better bargaining power you have as a capitalist, right?
If you have all cogs, the scope of your business is almost always local. You’re running a lawn mowing business or a subway. And I’m not denigrating those businesses just making the point that they’re not the bulk of the economy. If you’re running a serious business part of your business may be cogs but there’s a very important layer of non cogs that you spend most of your time recruiting, keeping, and guiding. These folks are irreplaceable.
Moreover, there was at least one company which did pivot --- the Chevy Malibu station wagon my family owned in the mid-70s had a badge on the door openings:
>Body by Fisher
which had an image of the carriages which they had previously made.
It's an interesting story but a weird analogy and moral. What would have been better if the other 3,999 carriage companies had all tried to make automobiles? Probably about 3,990 shitty cars and a few more mild successes. I'm not sure that's any better.
That's what I see with AI. Every company wants to suddenly "be an AI company", although few are sure what that means. Companies that were legitimately very good at a specific thing are now more interested in being mediocre at the same thing as everyone else. Maybe this will work out in the long run, but right now it's a pain in the ass.
>In each of the three companies that survived, it was the founders, not hired CEOs that drove the transition.
This is how VCs destroy businesses by bringing in adult supervision. CEOs are not incentivized to play the long game.
The difference with the mobility and transportation industry, whether by carriage and horse or by motor car, was that it was in demand by 99% of the population. AI, on the other hand, is only demanded by say 5%-10% of the population. How many people truly want an AI fridge or dishwasher? They just want fresh food and clean dishes.
I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4000 to start making cars, and therefore the CEO "knew better" than the other ones.
But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words, they were big and "wealthy" enough to consider the "painful transformation", as the article puts it.
How many of the 3999 companies that didn't pivot actually had the capacity to do so?
Is it really a lesson in divining the future, or more survivorship bias?
Agreed. The automobile was two innovations, not one. If Ford had created a carriage assembly line in an alternate history without automobiles, how many carriage makers would he have put out of business? The United States certainly couldn't have supported 4000 carriage assembly lines. Most of those carriage makers did not have the capacity or volume to finance and support an assembly line.
Also, the auto built on some technologies that were either invented or refined by the bicycle industry: Pneumatic tires, ball bearings, improved steel alloys, and a gradual move to factory production. Many of the first paved roads were the result of demand from bicyclists.
> He founded Buick in 1904 and in 1908 set up General Motors. ... In 1910 Durant would be fired by his board. Undeterred, Durant founded Chevrolet, took it public and in 1916 did a hostile takeover of GM and fired the board. He got thrown out again by his new board in 1920 and died penniless managing a bowling alley.
I've listened to so many CEOs in various industries (not just tech) salivating at the potential ability to cutout the software engineering middle man to make their ideas come to life (from PMs, to Engineers, to Managers, etc.). They truly believe the AI revolution is going to make them god's gift to the world.
I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.
Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists who are winners in the short term who are riding the hype cycle just like crypto.
The shift described in the article is more about craftsmanship vs mass production (Ford's conveyor belt and so on), and "disruption" is not the right word, as it took place over decades. Most people who started as coach builders could probably keep their jobs, as fewer and fewer new people entered the trade.
There were some classes of combustion engines that smaller shops did manufacture, such as big hot-bulb engines for ships and factories. Miniaturised combustion engines or electric motors are not suitable for craftsman-like building but rather standardised procedures with specialised machines.
The main mechanism is not "disruption" but rather a trend of miniaturisation and mass production.
Stepping back from the specifics these are stories of human nature.
We tag “complacency” as bad, but I think it’s just a byproduct of our reliance on heuristics and patterns which is evolutionarily useful overall.
On the other hand we worry (sometimes excessively) about how the future might unfold and really much of that is unknown.
Much more practical (and rewarding) to keep improving oneself or one's organisation to meet the needs of the world today, with an eye on how the world is evolving, rather than try to be some oracle or predict too far out (in which case you need to get both the prediction and the execution right!).
As an aside, it seems a recent fashion to love these big bets these days (AI, remember Metaverse), and to make big high conviction statements about the future, but that’s more to do with their individual specific circumstances and motivations.
I feel this at a personal level. I started as an Android developer and stayed so. Not venturing into hybrid/etc or even trying to be into iOS as well, let alone backend, full stack (let's not even begin to talk of AI) - while kind of always seeing this might happen. Now I see the world pass by kind of. I don't think it's always missing the future. Maybe a comfort zone thing - institutional or personal? Sometimes it's just vehement refusal to believe something. I think it's just foolish hope against the incoming tidal shift.
This reminds me of Mary Anderson [0], who invented the windshield wiper so early that her patent expired by the time Cadillac made them standard equipment.
I don't know if the problems at the company that I worked for, came from the CEO, or many of the powerful General Managers.
At my company, "General Manager" positions were the ones that actually set much of the planning priorities. Many of them, eventually got promoted to VP, and even, in the case of my former boss, the Chairman of the Board.
When the iPhone came out, one of my employees got one (the first version). I asked to borrow it, and took it to our Marketing department. I said "This is gonna be trouble for us."
I was laughed out of the room. They were following the strategy set down from the General Managers, which involved a lot of sneering at the competition.
The iPhone (and the various Android devices that accompanied it), ate my company for breakfast, and picked their teeth with our ribs.
A couple of the GMs actually anticipated the issues, but they were similarly laughed out of their rooms.
I saw the same thing happen to Kodak (the ones that actually invented digital photography), with an earlier disruption. I was at a conference, hosted by Kodak, and talked to a bunch of their digital engineers and Marketing folks.
They all had the same story: They were being deliberately kneecapped by the film people (with the direct support of the C-Suite).
At that time, I knew they were "Dead Man Walking." That was in 1996 or so.
There was an excellent thread(s? I think) about Nokia around these parts a few months back that covered this in detail by various commentators (perhaps you were one of them).
Wish I'd bookmarked them; some great reading in those
"We're all in on Blockchain! We're all in on VR! We're all in on self-driving! We're all in on NoSQL! We're all in on 3D printing!" The Gardner Hype Cycle is alive and well.
Enjoyed the history, but don't get the premise. Has any tech been watched more closely or adopted faster by incumbents?
> The first cars were expensive, unreliable, and slow
We can say the same about the AI features being added to every SaaS product right now. Productization will take a while, but people will figure out where LLMs add value soon enough.
For the most part, winning startups look like new categories rather than those beating an incumbent. Very different than SaaS winners.
Interestingly, my grandfather worked as a mechanic at a family-owned Chrysler car dealership for 30 years that previously sold carriages. It's in their logo and they have one on the roof.
>- Starved for fuel in a world with no gas stations
Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.
>- Unsuitable for the dirt roads of rural America
but the process of improving roads for the new-fangled bicycle was well underway.
Linux won on cost once it was "good enough". AI isn't free (by any definition of free) and is a long way away from "good enough" to be a general replacement for the status quo in a lot of domains.
The areas where it does make sense to use, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.
By the time Linux won it was better - by 2003 you could take a workload that took eight hours on some ridiculous Sun machine and run it in 40 minutes on a Xeon box.
Thing is, those companies can't do much if whole lines of business become obsolete. Behind every company there is a core competence that forms the value, and the rest of the business is just a wrapper. When the core competence is worthless, the company is just out. Even if they know it's coming, there's little they can do. In fact, the best thing they can actually do is turn the company into a milk cow, extract all the value they can here and now, and stop all investment in the future - that will probably generate enormous profits for a few years. Extract them and invest in the wider stock market.
This kind of article has to be a subgenre of business writing.
Why didn't all the carriage makers (400+) become Ford, General Motors and Chrysler?
Why didn't hundreds of catalogue sales companies become Amazon?
Why didn't hundreds of local city taxi services become Uber and Lyft?
Hint: there's hundreds on one side of these questions and a handful on the other.
Beyond the point that a future market doesn't necessarily have space for present players, the "Ooh, look how foolish, they missed the next wave" articles miss the point that present businesses exist to make money in the present and generally do so. If you're a horseshoe maker, you may know your days are numbered, but you have equipment and you're making money. Liquidating to jump into the next wave may not make any sense - make your product 'till demand stops and retire. Don't reinvest, but maybe raise prices and extract all you can from the operation now. Basically, "failed to pivot" applies to startups that don't have a capital investment and an income stream with a given technology. If you have those, speculative pivoting is ignoring your fiduciary duty to protect that stuff while it's making money, even if the income stream is declining.
And sure, I couldn't even get to the part about AI - this offended the economist part of me so much...
Yes, would have been a much better article if it told us how to be sure AI is the next automobile and that AI is not the next augmented reality, metaverse, blockchain, Segway, or fill-in-your-favorite-fad.
HN (not YC, who readily invest in blockchain companies) are usually about a decade out regarding blockchain knowledge. Paying 2-6% of all your transactions to intermediaries of varying value-add may seem sensible to you. That's fine.
Merchants aren't the customer target for credit cards, consumers are. Credit card payments are reversible and provide a reward. There are lots of options available that are better for merchants than credit cards (cash, debit cards, transfers, etc). But they all lose because the consumer prefers credit cards.
Cash isn't really great for merchants. You have to handle it, safeguard it, count it, get it to the bank. Many hands are involved that process and theft or loss can occur by any of them or by robbery/burglary. I don't know if it's a break-even on payment card fees but I bet it is close.
Yes, that's the varying value-add mentioned in the comment you're replying to. I pay 3.5% of every card transaction to Square. I don't get 3.5% cash/rewards back.
Do you get a discount for paying with cash (or blockchain)? In general the answer is no, meaning you aren't paying the 3.5% transaction fee, the merchant is.
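To put rough numbers on who pays what (a hedged illustration only: the 3.5% figure comes from the Square comment upthread, and the 2% reward rate is an assumed, typical-ish cash-back rate, not any issuer's actual terms):

```python
# Illustrative split of a $100 card sale under assumed rates.
sale = 100.00
merchant_fee = 0.035 * sale      # processing fee paid by the merchant (figure from upthread)
consumer_reward = 0.02 * sale    # assumed reward rate credited to the consumer
print(f"Merchant nets ${sale - merchant_fee:.2f}; consumer gets ${consumer_reward:.2f} back")
# Merchant nets $96.50; consumer gets $2.00 back
```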
Ironic to read this on a site that's unusable on mobile.
I'm guessing that for someone in the laundry or restaurant business it might be hard to understand how AI could change their lives. And that is true, at least at this stage in the adoption and development of AI. But eventually it will find a way into their business in some form or other.
There are stages to this. Pretty sure the first jobs to go will be the easiest ones. This is the case with software development too. When people say writing code has gotten easier, they really are talking about projects that were already easy to build getting even easier. The harder parts of software development are still hard. Making changes to larger code bases with a huge user base comes with problems where writing code is kind of irrelevant. There are bigger issues to address like regression, testing, stability, quality, user adoption etc etc.
The second stage is of course that once the easy stuff gets too easy to build, there is little incentive to build it. With modern building techniques we aren't building infinite huts, are we? We pivoted to building skyscrapers. I do believe most of AI's automation gains will be soaked up in the first wave, there will be little incentive to build the easy stuff, and the harder stuff will have more productivity demands from people than ever before.
I can strain the analogy just enough to get something useful from it.
If we laboriously create software shops in the classical way, and suddenly a new shop appears that is buggy, noisy, etc but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of these old shops are not going to make it.
It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.
I'd like to see an analysis of what happened to the employees, blacksmiths, machinists, etc. Surely there are transferable skills and many went on to work on automobiles?
This Stack Exchange question implies there was some transition rather than chaos.
https://history.stackexchange.com/questions/46866/did-any-ca...
Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" when AI becomes a much more mechanical way to produce, vs employing opinionated experts.
I agree as I point out in other comments here - you said it with more detail.
AGI + robot is way beyond a mere change in product conception or implementation. It's beyond craftsmen v. modern forms of manufacturing we sometimes read about with guns.
It is a strain indeed to get from cars vs. buggies to AGI. I dare say that without AGI as part and parcel of AI, the internalization of AI must necessarily be quite different.
> With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.
I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:
1. Massive dose of FOMO from leadership terrified of falling behind
2. A fundamental lack of core competency. Many of these companies (I'm talking about more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product
3. Layoffs in all but name, mainly in response to a changing tax environment. See also: RTO.
There's a qualitative difference between the jump from OK transport to better transport, and AI.
If we're going to talk cars, I think what the Japanese did to the big three in the 1980s would have been far more on point.
AI is encumbered by AGI which is further encumbered by the delta between what is claimed possible (around the corner) and what is. That's a whole different ball game with wildly different risk/reward tradeoffs.
Learning about history post buggies didn't do much for me.
Mobility is not an analogy for AI, it's an analogy to whichever industry you work in. If you publish a magazine, you may think you're in the 'publishing' business and see AI as a weak competitor, maybe capable of squashing crappy blogs but not prestigious media like yours. But maybe what you're really in is the 'content' business, and you need to recognize that sooner or later, AI is going to beat you at the content game even if it couldn't beat you at the publishing game. The kicker being that there no longer exists a publishing game, because AI.
Or more likely, you are in the publishing business but the tech world unilaterally deemed everything creative to be a fungible commodity and undertook a multi-billion dollar campaign to ingest actual creative content and compete with everyone that creates it in the same market with cheap knockoffs. Our society predictably considers this progress because nothing that could potentially make that much money could possibly be problematic. We continue in the trend of thinking small amounts of good things are not as good as giant piles of crap if the crap can be made more cheaply.
Viewed from a different angle I think he's probably close. A service provider changing the back end while leaving the front end UI similar is not dissimilar to early cars being built like carriages. But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
> But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
For a bunch of stuff - banks, online shopping, booking a taxi, etc - this shift already happened with non-LLM-based "send me notifications of unusual account activity" or even the dead-simple "send me an email about every transaction on my bank account." Phone notifications moved it from email to built-into-the-OS even.
The "LLM hype cycle" tweak becomes something like "have an LLM summarize the email instead of just listing the three transactions" which is of dubious use to the average user.
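To make that tweak concrete: it is roughly one extra API call layered on top of a notification that already works without any model. A minimal sketch, assuming the OpenAI Python client and using a placeholder model name (none of this is from the thread, it is just an illustration):

    from openai import OpenAI

    transactions = [
        "-$4.50 coffee shop",
        "-$62.10 grocery store",
        "+$1,200.00 payroll deposit",
    ]

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Summarize these bank transactions in one sentence:\n"
                       + "\n".join(transactions),
        }],
    )
    print(response.choices[0].message.content)

The notification pipeline itself (detecting and listing the three transactions) needs no LLM at all, which is the point being made about dubious added value.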
No the shift hasn't happened yet at all. Let's take those examples one by one.
Banks: Normal retail customers are responsible for managing their account balances, importing transaction data into whatever bookkeeping system, downloading their tax forms for filing, adjusting their services and strategy based on whatever they're planning to do in their life, etc. Private banking is a reasonable model for the service that everyone should get, but can't because it's too expensive.
Online shopping: Most people have to figure out what they're looking for, research the options, figure out where to order from, keep track of warranties, repairs, returns, recalls, maintenance, consumables, etc. Personal assistants can absorb most of that, but that's expensive.
Booking a taxi: On the same theme, for all the scheduled travel that should be booked and ready to go based on your calendar. Personal assistants can do this too, but again it's expensive.
The core ideas of giving the service provider context, guidance, and autonomy to work without regular intervention are not unique to automation but only recently is there a conceivable path to building software that can actually deliver.
> The best AI applications are beneath the surface to empower users
Not this time, tho. ChatGPT is the iphone moment for "AI" for the masses. And it was surprising and unexpected both for the experts / practitioners and said masses. Working with LLMs pre gpt3.5 was a mess, hackish and "in the background" but way way worse experience overall. Chatgpt made it happen just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iphone presentation.
The fact that we went from that 3.5 to whatever claude code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6mo behind big lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before chatgpt launched. It is insane!
You'll probably laugh at this, but a lot of fine-tuning experimentation and gains in the open source world (hell, maybe even at the big labs, but we'll never know) is from the "horny people" using local llms for erotica and stuff. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space, this one is different, no matter how much anti-hype tokens get spent on this subject.
I’ll spend an anti-hype token :)
ChatGPT wasn’t the iphone moment, because the iphone wasn’t quickly forgotten.
Outside of software, most adult professionals in my network had a play with chatgpt and have long since abandoned their accounts. They can’t use chatbots for work (maybe data is sensitive, or their ‘knowledge work’ isn’t the kind that produces text output). Our native language is too poorly supported for life admin (no Gemini summaries or ‘help writing an email’). They just don’t have any obvious use case for LLMs in their life.
It’s tough because every CEO and VC is hyperventilating about LLMs as a paradigm shift for humanity when in reality they are useful but also so are gene editing and solid state batteries and mrna vaccines. It’s just that software innovations are much more attractive to certain groups with money.
"It’s tough because every CEO and VC [on LinkedIn and CNBC] is hyperventilating about LLMs as a paradigm shift for humanity"
I guess there's a quiet majority thing going on where the vast majority of businesses are just not integrating chatbots because their business is not generating text.
Not only that, there is active backlash for talking about ChatGPT in social circles now. Whereas, I guess March 2023-ish, it was the topic of conversation. Then when something new dropped it came up again, and most people had used it and had an interesting story, mainly about asking it for some sort of advice. Now when someone mentions it or tries to show you something it's mostly an eye roll, and to the non-tech general user it hasn't made any major improvement since mid 2023. Most people I know are in fact complaining about the amount of crappy AI content and are actively opposed to it.
ChatGPT has between 800 million and 1 billion weekly users.
>>Outside of software, most adult professionals in my network had a play with chatgpt and have long since abandoned their accounts.
I know an architect; after much encouragement she tried using it. She said ChatGPT would, most of the time, turn a bedroom window into a restroom. It's kind of hilarious, because guessing the next word and spatial thinking seem to be very different beasts altogether. And in some way they might be two different tracks of intelligence. Like two different types of AGI.
A picture is better than a thousand words, as the saying goes.
My guess is a picture is better than infinite words. How do you explain something as it exists? You can use as many words, phrases, metaphors and similes as you like. But really, is it possible to describe something in words and have two different people, or even a computer program, not imagine it very differently?
Another way of looking at this is that language itself might be several layers below intelligence. If you notice, you can get close but never accurately describe what you are thinking. If that is the case we are truly cooked and might never have AGI itself, as there is only so far you can go representing something you don't understand by guessing.
It may be true, but Bezos' comment is also classic smoke blowing. "Oh well, you can't see us using <newest hype machine> or quantify its success, but it's certainly in everything we do!"
But it’s completely true — Amazon undoubtedly has a pretty advanced logistics set up and certainly uses AI all over the place. Even if they’re not a big AI researcher.
There are a lot of great use cases for ML outside of chatbots
It's not "generative AI" which is what most people mean when they say "AI" today, outside of "old school" AI/ML folks.
So at best technically correct on his part but still semantically incorrect
> There are a lot of great use cases for ML outside of chatbots
To be slightly provocative, most of the ML applications that are profitable are not chatbots.
To stay on Amazon, their product recommendations, ads ranking, and search likely make Amazon way more than their little AI summaries or Rufus chatbot.
But also, like, how much of that is really "AI" in the general sense, as it applies to things like ChatGPT today? Do you really need a massive, resource-intensive system for product recommendations and things related to Amazon's marketing?
Just today I used the AI service on the amazon product page to get more information about a specific product, basically RAG on the reviews.
So maybe your analysis is outdated?
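For readers who haven't met the term: "RAG on the reviews" roughly means retrieving the handful of reviews most relevant to your question and handing only those to the model as context. A toy sketch with naive keyword retrieval (real systems would use embeddings; the function names here are purely illustrative):

    def top_reviews(question: str, reviews: list[str], k: int = 3) -> list[str]:
        # naive retrieval: rank reviews by how many words they share with the question
        q_words = set(question.lower().split())
        return sorted(
            reviews,
            key=lambda r: len(q_words & set(r.lower().split())),
            reverse=True,
        )[:k]

    def build_prompt(question: str, reviews: list[str]) -> str:
        # hand only the retrieved reviews to the model as context
        context = "\n".join(top_reviews(question, reviews))
        return f"Answer using only these reviews:\n{context}\n\nQuestion: {question}"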
An Amazon AI chatbot is also the only way to request a refund after you haven't received your package.
The Amazon store chatbot is among the worst implementations I've seen. The old UI, which displayed the customer questions and allowed searching them, was infinitely better.
FWIW, the old UI (which I agree is better) is still available. Once the "AI search" is done, there's a dropdown you can click and it will show all the reviews that include the word you searched.
I think you two are talking about different things: the product review summary and the chatbot.
Are you seriously suggesting the crappy AI bot on Amazon product pages is evidence of an 'AI' revolution? The thing sucks. If I'm ready to spend money on a product, it's worth my time to do a traditional keyword search and quickly scroll through the search returns to get the contextualized information, rather than hoping an LLM will get it right.
Right. The point is that in frothy market conditions and a general low-integrity regime in business and politics there is a ton of incentive to exploit FOMO far beyond its already "that's a stiff sip there" potency, and this leads to otherwise sane and honest people getting caught up into doing concrete things today based on total speculation about technology that isn't even proposed yet. A good way to really understand this intuitively is to take the present-day intellectual and emotional charge out of it without loss of generality: we can go back and look at Moore's Law for example, and the history of how the sausage got made on reconciling a prediction of exponential growth with the realities of technological advance. It's a fascinating history, there's at least one great book [1] and the Asianometry YouTube documentary series on it is great as always [2].
There is no point in doing business and politics and money-motivated stuff based on the hypothetical that technology will become self-improving. If that happens we're through the looking glass, not in Kansas anymore, "Roads? Where we're going, we won't need roads." It won't matter, or at least it won't be what you think; it'll be some crazy thing.
Much, much, much, much more likely is that this is like all the other times: we made some real progress, people got too excited, some shady people made some money, and we all sobered up and started working on the next milestone. This is by far both A) the only scenario you can do anything about and B) the only scenario honest experts take seriously, so it's a double "plan for this one".
The quiet ways that Jetson Orin devices and shit will keep getting smarter and more trustworthy to not break shit and stuff, that's the bigger story, it will make a much bigger difference than snazzy Google that talks back, but it's taking time and appearing in the military first and comes in fits and starts and has all the other properties of ya know, reality.
[1] https://www.amazon.com/Moores-Law-Silicon-Valleys-Revolution...
[2] https://www.youtube.com/@Asianometry
This article assumes that a company is like an organism trying to survive. In fact the company is owned by people who want to make money, and they may well decide that the easiest way to do that is to make as much money as possible in the existing business and then shut it down.
Fundamentally this article is reasoning in units of “companies,” but the story is different when reasoning in terms of people.
It turns out automobile companies need way more employees than carriage companies, so the net impact on employment was positive. Then add in all the jobs around automobiles like oil, refining, fueling, repair, road construction, etc.
Do we care if companies put each other out of business via innovation? On the whole, not really. People who study economics largely consider it a positive: “creative destruction.”
The real question of LLM AI is whether it will have a net negative impact on total employment. If so, it would be the first major human technology in history to do that. In the long run I hope it does, because the human population will soon level off. If we want to keep economic growth and standards of living, we will need major advances in productivity.
Let us see how this will age. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement, we still need some fundamental new ideas. Integration with external tools will help but not overcome fundamental limitations. Once the hype is over, I think large language models will have a place as simpler and more accessible user interface just like graphical user interfaces displaced a lot of text based interfaces and they will be a powerful tool for language processing that is hard or impossible to do with more traditional tools like statistical analysis and so on.
[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory not susceptible to hallucinating facts.
> The current generation of AI models will turn out to be essentially a dead end.
It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".
To give some pause before dismissing the current state of the art prematurely:
I would already consider LLM-based current systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we arguably reached a very important milestone already.
I would argue that the biggest limitation on current "AI" is that it is architected to not have agency; if you had GPT-3 level intelligence in an easily anthropomorphizable package (furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without even any real technical progress.
I think the main thing I want from an AI in order to call it intelligent is the ability to reason. I provide an explanation of how long multiplication works and then the AI is capable of multiplying arbitrary large numbers. And - correct me if I am wrong - large language models can not do this. And this despite probably being exposed to a lot of mathematics during training whereas in a strong version of this test I would want nothing related to long multiplication in the training data.
I'm not sure if popular models cheat at this, but if I ask for it (o3-mini) I get correct results/intermediate values (for 794206 * 43124, chosen randomly).
I do suspect this is only achievable because the model was specifically trained for this.
But the same is true for humans; children can't really "reason themselves" into basic arithmetic-- that's a skill that requires considerable training.
I do concede that this (learning/skill acquisition) is something that humans can do "online" (within days/weeks/months) while LLMs need a separate process for it.
> in a strong version of this test I would want nothing related to long multiplication in the training data.
Is this not a bit of a double standard? I think at least 99/100 humans with minimal previous math exposure would utterly fail this test.
I just tested it with Copilot with two random 45 digit numbers and it gets it correct by translating it into Python and running it in the background. When I asked it not to use any external tools, it got the first five, the last two, and a handful more digits in the middle correct, out of 90. It also fails to calculate the 45 intermediate products - one number times one digit from the other - including multiplying by zero and one.
The models can do surprisingly large numbers correctly, but they essentially memorized them. As you make the numbers longer and longer, the result becomes garbage. If they would actually reason about it, this would not happen, multiplying those long numbers is not really harder than multiplying two digit numbers, just more time consuming and annoying.
And I do not want the model to figure multiplication out on its own, I want to provide it with what teachers tell children until they get to long multiplication. The only thing where I want to push the AI is to do it for much longer numbers, not only two, three, four digits or whatever you do in primary school.
And the difference is not only in online vs offline, large language models have almost certainly been trained on heaps of basic mathematics, but did not learn to multiply. They can explain to you how to do it because they have seen countless explanation and examples, but they can not actually do it themselves.
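For what it's worth, the procedure in question is tiny when written down. A minimal Python sketch of the grade-school algorithm being described (one partial product per digit, shifted by place value, then summed); this is my own illustration, not output from any model:

    def long_multiply(a: int, b: int) -> int:
        # pencil-and-paper method: one partial product per digit of b,
        # shifted by that digit's place value, then summed
        digits = [int(d) for d in str(b)][::-1]  # least-significant digit first
        partials = [a * d * 10**i for i, d in enumerate(digits)]
        return sum(partials)

    # the randomly chosen example from upthread
    assert long_multiply(794206, 43124) == 794206 * 43124

Nothing in the procedure gets conceptually harder as the numbers get longer, only more tedious, which is the point about memorization versus reasoning.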
When kids learn multiplication, they learn it on paper, not just in their heads. LLMs don’t have access to paper.
“Do long arithmetic entirely in your mind” is not a test most humans can pass. Maybe a few savants. This makes me suspect it is not a reliable test of reasoning.
Humans also get a training run every night. As we sleep, our brains are integrating our experiences from the day into our existing minds, so we can learn things from day to day. Kids definitely do not learn long multiplication in just one day. LLMs don’t work like this; they get only one training run and that is when they have to learn everything all at once.
LLMs for sure cannot learn and reason the same way humans do. Does that mean they cannot reason at all? Harder question IMO. You’re right that Python did the math, but the LLM wrote the Python. Maybe that is like their version of “doing it on paper.”
Intelligence alone does not have ethical implications w.r.t. how we treat the intelligent entity. Suffering has ethical implications, but intelligence does not imply suffering. There's no evidence that LLMs can suffer (note that that's less evidence than for, say, crayfish suffering).
If you asked your cat to make a REST API call I suppose it would fail, but the same applies if you asked a chatbot to predict realtime prey behavior.
I think LLMs are much closer to grasping movement prediction than the cat is to learning English, for what it's worth.
IMO "ability to communicate" is a somewhat fair proxy for intelligence (even if it does not capture all of an animal's capabilities), and current LLMs are clearly superior to any animal in that regard.
>I would already consider LLM based current systems more "intelligent" than a housecat.
An interesting experiment would be to have a robot with an LLM mind and see what things it could figure out, like would it learn to charge itself or something. But personally I don't think they have anywhere near the general intelligence of animals.
It may be that LLM-AI is a dead end on the path to General AI (although I suspect it will instead turn out to be one component). But that doesn't mean that LLMs aren't good for some things. From what I've seen, they represent a huge improvement in (machine) translation, for example. And reportedly they're pretty good at spiffing up human-written text, and maybe even generating text--provided the human is on the lookout for hallucinations (and knows how to watch for that).
You might even say LLMs are good with text in the same way that early automobiles were good for transportation, provided you watched out for the potholes and stream crossings and didn't try to cross the river on the railroad bridge. (DeLoreans are said to be good at that, though :).)
This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.
Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).
Sure, if I ask about things I know nothing about, then I can get something done with little effort. But when I ask about something where I am an expert, then large language models have surprisingly little to offer. And because I am an expert, it becomes apparent how bad they are, which in turn makes me hesitate to use them for things I know nothing about because I am unprepared to judge the quality of the response. As a developer I am an expert on programming and I think I never got something useful out of a large language model beyond pointers to relevant APIs or standards, a very good tool to search through documentation, at least up to the point that it starts hallucinating stuff.
When I wrote dead end, I meant for achieving an AI that can properly reason and knows what it knows and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying fact that one has to double check what the model says.
I think that what's available today is a drain on productivity, not an improvement, because it's so unreliable that you have to babysit it constantly to make sure it hasn't fucked up. That is not exactly reassuring as to the future, in my view.
Isn't this entirely missing the point of the article?
> When early automobiles began appearing in the 1890’s — first steam-powered, then electric, then gasoline — most carriage and wagon makers dismissed them. Why wouldn’t they? The first cars were: Loud and unreliable, Expensive and hard to repair, Starved for fuel in a world with no gas stations, Unsuitable for the dirt roads of rural America
That sounds like complaints against today's LLM limitations. It will be interesting to see how your comment ages in 5-10-15 years. You might be technically right that LLMs are a dead end. But the article isn't about LLMs really, it's about the change to an "AI" world from a non-AI world and how the author believes it will be similar to the change from the non-car to the car world.
Sorry, but to say current LLMs are a "dead end" is kind of insane if you compare them with the previous records at general AI before LLMs. The earlier language models would be happy to be SOTA in 5 random benchmarks (like sentiment or some types of multiple choice questions), and SOTA otherwise consisted of some AIs that could play like 50 Atari games. And out of nowhere we have AI models that can do tasks which are not in the training set, pass Turing tests, tell jokes, and work out of the box on robots. It's literally an insane level of progress, and even if current techniques don't get to full human level, it will not have been a dead end in any sense.
I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics, feed it basic school books and explanations and exercises just like a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.
The current generation of LLMs has a very limited ability to learn new skills at inference time. I disagree that this means they cannot reason. I think reasoning is by and large a skill which can be taught at training time.
Do you have an example of some reasoning ability any of the large language models has learned? Or do you just mean that you think, we could train them in principle?
See my other answer.
Something can be much better than before but still be a dead end. Literally a dead end road can take you closer but never get you there.
But a dead end to what? All progress eventually plateaus somewhere? It's clearly insanely useful in practice. And do you think there will be any future AGI whose development is not helped by current LLM technology? Even if the architecture is completely different, the ability of LLMs to understand human data automatically is unparalleled.
You're in a bubble. Anyone who is responsible for making decisions and not just generating text for a living has more trouble seeing what is "insanely useful" about language models.
I don’t think you’re right about that. LLMs are very good for exploring half-formed ideas, (what materials could I look at for x project?), generating small amounts of code when it’s not your main job, and writing boring crap like grant applications.
That last one isn’t useful to society, but it is for the individual.
I know plenty of people using LLMs using for stuff like this, in all sorts of walks of life.
Anthropic and OpenAI researchers themselves certainly use AI--do you think they generate text for a living?
What do they use it for?
edit (it's late, I'm just being a snark. I don't think researchers whose job is implicitly tied to hype is a good example of a worker increasing their productivity)
To reaching AI that can reason. And sure, as I wrote, large language models might become a relevant component for processing natural language inputs and outputs, but I do not see a path towards large language models becoming able to reason without some fundamentally new ideas. At the moment we try to paper over this deficit by giving large language model access to all kind of external tools like search engines, compilers, theorem provers, and so on.
When LLMs attempt some novel problems (I'm thinking of pure mathematics here) they can try possible approaches and examine by themselves which approaches are working and which are not, and then come to conclusions. That is enough for me to conclude they are reasoning.
> the ability of LLMs to understand
But it doesn't understand. It's just similarity and next-likely-token search. The trick is that this turns out to be useful or pleasing when tuned well enough.
Implementation doesn't matter. Insofar as human understanding can be reflected in a text conversation, its distribution can be approximated using a distribution over next-token predictions. Hence there exist next-token predictors which are indistinguishable from a human over text--and I do not distinguish identical behaviors.
There is some truth to this, but the biggest concerns I have about AI are not related to who will realize the change is coming. They are moral/ethical concerns that transcend any particular market. Things connected to privacy, creativity, authorship, inequality and the like. This means that AI isn't really the cause of these concerns, it's just the current front line of these larger issues, which have persisted across all manner of disruptions across all manner of industry.
This kind of just-so story is easy to write after the fact. It's harder to see the future at the time.
How many people read a version of the same story and pivoted their company to focus on SecondLife, NFTs, blockchain or whatever else technology was hyped at the time and tanked? That's the other half of this story.
Ideas that worked but didn't catch on:
- Virtual worlds / metaverses
You can replicate real life, but it's kind of boring.
- 3D printing
Became a useful industrial tool, but home 3D printing never went mainstream. At one point Office Depot offered 3D printing. No longer.
- Self-driving minibuses
Several startups built these, and some were deployed. Never really caught on. You'd think that airport parking shuttles and such would use these, but they don't.
- Small gas turbines
Power for cars, buses, trucks, backup power, and other things where you need tens to hundreds of kilowatts in a small package. All those things were built and worked. But the technology never became cheap. Aircraft APUs for large aircraft and the US Army's M1 tank variants remain among the few deployed applications. The frustration of turbine engines is that below bizjet size, smaller units are not much cheaper.
- 3D TV
That got far enough that 3D TV sets were in stores. But they didn't sell.
- Nuclear power
Works, mostly, but isn't really cost-effective. Failures are very expensive and require evacuating sizable areas.
- Proof of correctness for programs
After forty years, it's still a clunky process.
- Maglev trains
Works, but insanely expensive.
- The Segway
Works, but scooters do the same job with less expense.
- 3D input devices
They used to be seen at trade shows, but it turns out that they don't make 3D input easier.
It's quite possible to guess wrong.
Metaverse (virtual worlds) did catch on - virtual offices and storefronts didn't really catch on, but people enjoy virtual worlds for: competitive and cooperative gaming; virtual fashion and environment construction; chat and social interaction; storytelling; performance; etc. Mostly non-commerce recreation activities. Look at the success of fortnite, minecraft, world of warcraft, etc. These share the dimension of shared recreational experiences and activities that give people a reason to spend time in the virtual world.
I like the historical part of this article, but the current problem is the reverse.
Everyone is jumping on the AI train and forgetting the fundamentals.
AI will plausibly disrupt everything
We have a system to which I can upload a generic video, and which captures eveeeeeerything in it, from audio, to subtitles onscreen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and, still, there are so many people who seem to believe that this won't revolutionize most fields?
The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked sufficiently into to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved though the use of these technologies, the surface of which we've only barely scratched!
Billions are being poured into trying to use LLMs and GenAI to solve problems, trying to create the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.
From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!
I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!
People dislike the unreliability and not being able to reason about potential failure scenarios.
Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
And lastly, you've gone to great lengths to completely air-gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote on an inference cluster?
I mostly agree with all your points being issues, I just don't see them as roadblocks to the future I mentioned, nor do I find them issues without solutions or workarounds.
Unreliability and difficulty reasoning about potential failure scenarios is tough. I've been going through the rather painful process of taming LLMs to do the things we want them to, and I feel that. However, for each feature, we have been finding what we consider to be rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs and it is adding immense value. It would not be possible because of (i) a subset of the features themselves, which simply would not be possible; (ii) time to market. We are now offloading the parts of the LLM which would be possible with code to code — after we've reached the market (which we have).
> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
I don't see how this would necessarily happen? I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem with AI producing code that looks right, but isn't exactly. I see all of these, but don't see them as roadblocks — not more than I see human error as roadblocks in many cases where these systems I'm thinking about will be going.
With regards to customers' IP, this seems again more to do with the fact some junior dev is being allowed to do this? Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing around code using pastebin years ago. This is not an LLM problem (though certainly it is exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).
I'll put this another way: Just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles is unbelievable. Just on the back of that, nothing else, we can create rather amazing products or utilities. How is this not revolutionary?
So would a universal cancer vaccine, but no one is acting like it's just around the corner.
I'm old enough to remember when "big data" and later "deep data" was going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.
AI as currently marketed is just that with an LLM chatbot.
I definitely don't think so. You're seeing companies who have a lot of publicity on the internet. There are tons of very successful SMBs who have no real idea of what to do with AI, and they're not jumping on it at all. They're at risk.
> They're at risk.
They're at risk of what? It's easy to hand-wave about disruption, but where's the beef?
Seriously. What should my local roofing company's AI strategy be, and what are they risking by not having one?
I can tell you for sure they did not have a Blockchain strategy, and they turned out just fine.
at risk of getting all my business because the big companies think I want to talk to a bot instead of a person lol
It's only a risk if there's a moat. What's the moat for jumping in early?
An interesting aspect that doesn't seem captured by TFA and similar articles is that it is not a specific kind of business that is being disrupted, but rather an entire genre of labor on which they all rely to varying extents: knowledge work. Furthermore, "knowledge work" is a very broad term that encompasses an extremely broad variety of skillsets (engineering, HR, sales, legal, medical...) And knowledge workers are indeed being rapidly disrupted by GenAI.
This is an interesting phenomenon that probably has no historical equivalent and hence may not have been fully contemplated in any literature, and so comparisons like TFA fall short of capturing the full implications.
Whether these companies see themselves an AI company seems orthogonal to the fact that they should acknowledge this sea-change and adapt. However, currently all industries seem to be thinking they should be an "AI company" and are responding by trying to stuff AI into any product they can. Maybe the urgency for them to adapt should be based on the degree to which knowledge work is critical to their business.
If "knowledge work" is under such threat from GenAI, it is revealing what extent it is actually a euphemism for "clerical work".
> Even with evidence staring them in the face, carriage companies still did not pivot, assuming cars were a fad.
I like this quote. But this analogy doesn't exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike previous artisans making carriages, the CEOs saying these things have very clear motivations to make you believe the hype.
I'm not sure I agree much
Cynically, there's no difference from a CEO's perspective between a human employee and a horse
They are both expenses that the CEO would probably prefer to do without whenever possible. A line item on a balance sheet, nothing more
I think CEOs that think this way are a self-fulfilling prophecy of doom. If they think of their employees as cogs that can be replaced, they get cogs that can be replaced.
Doesn't matter
The median CEO salary is in the millions, they do not have to ever worry about money again if they can just stick around for one CEO gig for a couple of years
Granted, people who become CEOs are not likely to think this way
But the fact is that when people have so much money they could retire immediately with no consequences, they are basically impossible for a business to hold accountable outside of actual illegal activity
And let's be real. Often it's difficult to even hold them accountable for actual illegal activity too
> they are basically impossible for a business to hold accountable outside of actual illegal activity
False. CEOs are held accountable all the time. At the extreme end, research shows that 1 in 3 CEOs are fired within 18 months.
Being fired is not being held accountable, it is being terminated
> Being fired is not being held accountable, it is being terminated
Termination is the end result of a process
It is not unreasonable to think that is an accountability process of some sort...
You're talking about being accountable to shareholders
I am talking about being accountable to society
>At the extreme end, research shows that 1 in 3 CEOs are fired within 18 months.
And the size of the parachute they get when they're tossed from the plane? I know that there are many small companies with someone in a "CEO" position who might not be hugely compensated, but speaking of CEOs at major corporate ventures here, as is commonly understood when one talks about questions of executive responsibility (or lack thereof), let's be real on some actual severance figure averages for a clearer picture of consequences and "punishment".
If you’re playing at that level, you’re not thinking about subsistence living and never having to work again. You are driven by ego, by winning, by legacy. All three incentivize you to do well if your board consists of non-asshats. You are playing a multi-shot game.
I know, that's my point
Incentives for CEOs and Executives are just way different, which is actually a huge part of the problem we face in society
We are run into the ground for profit by people who think the purpose of life is to profit
Isn't this good for the CEO? if your employees aren't cogs then what do you do if they leave? the more replaceable they are the better bargaining power you have as a capitalist right
If you have all cogs, the scope of your business is almost always local. You’re running a lawn mowing business or a subway. And I’m not denigrating those businesses just making the point that they’re not the bulk of the economy. If you’re running a serious business part of your business may be cogs but there’s a very important layer of non cogs that you spend most of your time recruiting, keeping, and guiding. These folks are irreplaceable.
Moreover, there was at least one company which did pivot --- the Chevy Malibu station wagon my family owned in the mid-70s had a badge on the door openings:
>Body by Fisher
which had an image of the carriages which they had previously made.
the CEOs saying these things have very clear motivations to make you believe the hype
And conversely, people who fear that they might be replaced have very clear motivations to claim that AI is useless.
It's an interesting story but a weird analogy and moral. What would have been better if the other 3,999 carriage companies had all tried to make automobiles? Probably about 3,990 shitty cars and a few more mild successes. I'm not sure that's any better.
That's what I see with AI. Every company wants to suddenly "be an AI company", although few are sure what that means. Companies that were legitimately very good at a specific thing are now more interested in being mediocre at the same thing as everyone else. Maybe this will work out in the long run but right now it's a pain in the ass.
At my workplace, when managers are done reading their business books, they go on a bookshelf in the break room.
There's an entire shelf devoted to "disruption."
>In each of the three companies that survived, it was the founders, not hired CEOs that drove the transition.
This is how VCs destroy businesses by bringing in adult supervision. CEOs are not incentivized to play the long game.
The difference between the mobility & transportation industry, whether by carriage and horse or by motor car, was that it was in demand by 99% of the population. AI, on the other hand, is only demanded by say 5%-10% of the population. How many people truly want an AI fridge or dishwasher? They just want fresh food and clean dishes.
Great read!
I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4000 to start making cars, and therefore the CEO "knew better" than the other ones.
But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words they were big and "wealthy" enough to consider the "painful transformation" as the article puts it.
How many of the 3999 companies that didn't pivot actually had any capacity to do so?
Is it really a lesson in divining the future, or more survivorship bias?
Agreed. The automobile was two innovations, not one. If Ford had created a carriage assembly line in an alternate history without automobiles, how many carriage makers would he have put out of business? The United States certainly couldn't have supported 4000 carriage assembly lines. Most of those carriage makers did not have the capacity or volume to finance and support an assembly line.
That's the part missing from TFA, there were thousands of auto 'startups', but only a handful survived the depression.
I might be a wealthy person if "my" company had survived the Depression.
Which company is that, you ask? My last name is Maxwell.
(But afaik, none of my ancestors owned or even worked for that car company.)
Also, the auto built on some technologies that were either invented or refined by the bicycle industry: Pneumatic tires, ball bearings, improved steel alloys, and a gradual move to factory production. Many of the first paved roads were the result of demand from bicyclists.
> He founded Buick in 1904 and in 1908 set up General Motors. ... In 1910 Durant would be fired by his board. Undeterred, Durant founded Chevrolet, took it public and in 1916 did a hostile takeover of GM and fired the board. He got thrown out again by his new board in 1920 and died penniless managing a bowling alley.
There is no hope, after all :(
I've listened to so many CEOs in various industries (not just tech) salivating at the potential ability to cut out the software engineering middleman to make their ideas come to life (from PMs, to Engineers, to Managers, etc.). They truly believe the AI revolution is going to make them god's gift to the world.
I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.
Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists who are winners in the short term who are riding the hype cycle just like crypto.
The shift described in the article is more about craftsmanship vs mass production (Ford's conveyor belt and so on), and disruption is not the right word as it took place over decades. Most people that started as coach builders could probably keep their jobs, as fewer and fewer new people entered the trade.
There were some classes of combustion engines that smaller shops did manufacture, such as big hot-bulb engines for ships and factories. Miniaturised combustion engines or electric motors are not suitable for craftsman-like building but rather standardised procedures with specialised machines.
The main mechanism is not "disruption" but rather a trend of miniaturisation and mass production.
Stepping back from the specifics these are stories of human nature.
We tag “complacency” as bad, but I think it’s just a byproduct of our reliance on heuristics and patterns which is evolutionarily useful overall.
On the other hand we worry (sometimes excessively) about how the future might unfold and really much of that is unknown.
Much more practical (and rewarding) to keep improving oneself or one's organisation to meet the needs of the world today, with an eye on how the world is evolving, rather than trying to be some oracle or predict too far out (in which case you need to get both the prediction and the execution right!).
As an aside, it seems a recent fashion to love these big bets these days (AI, remember Metaverse), and to make big high conviction statements about the future, but that’s more to do with their individual specific circumstances and motivations.
I feel this at a personal level. I started as an Android developer and stayed so. Not venturing into hybrid/etc or even trying to be into iOS as well, let alone backend, full stack (let's not even begin to talk of AI) - while kind of always seeing this might happen. Now I see the world pass by kind of. I don't think it's always missing the future. Maybe a comfort zone thing - institutional or personal? Sometimes it's just vehement refusal to believe something. I think it's just foolish hope against the incoming tidal shift.
The historical part completely misses the first boom of EVs, from the 1890s to the 1910s, besides mentioning that they existed.
The history of those is the big untold story here.
It doesn't help if you're betting on the right tech too early.
Clearly superior in theory, but lacking significant breakthroughs in battery research, and hampered by the general spottiness of electrification in that era.
Tons of electric vehicle companies existed to promote that comparable tech.
Instead the handful of combustion engine companies eventually drove everyone else out of the market, not least because gasoline was marketed as more manly.
https://www.theguardian.com/technology/2021/aug/03/lost-hist...
Yep. Too early is as bad as too late. The EV was invented but the supporting technology wasn't there.
Lots of ideas that failed in the first dotcom boom in the late 1990s are popular and successful today but weren't able to find a market at the time.
This reminds me of Mary Anderson [0], who invented the windshield wiper so early that her patent expired by the time Cadillac made them standard equipment.
[0] https://en.wikipedia.org/wiki/Mary_Anderson_(inventor)
I don't know if the problems at the company that I worked for came from the CEO, or from many of the powerful General Managers.
At my company, "General Manager" positions were the ones that actually set much of the planning priorities. Many of them, eventually got promoted to VP, and even, in the case of my former boss, the Chairman of the Board.
When the iPhone came out, one of my employees got one (the first version). I asked to borrow it, and took it to our Marketing department. I said "This is gonna be trouble for us."
I was laughed out of the room. They were following the strategy set down from the General Managers, which involved a lot of sneering at the competition.
The iPhone (and the various Android devices that accompanied it), ate my company for breakfast, and picked their teeth with our ribs.
A couple of the GMs actually anticipated the issues, but they were similarly laughed out of their rooms.
I saw the same thing happen to Kodak (the ones that actually invented digital photography), with an earlier disruption. I was at a conference, hosted by Kodak, and talked to a bunch of their digital engineers and Marketing folks.
They all had the same story: They were being deliberately kneecapped by the film people (with the direct support of the C-Suite).
At that time, I knew they were "Dead Man Walking." That was in 1996 or so.
There was an excellent thread (or threads, I think) about Nokia around these parts a few months back, in which various commentators covered this in detail (perhaps you were one of them).
Wish I'd bookmarked them; there was some great reading in those.
This one? https://news.ycombinator.com/item?id=42724761
History is full of examples of execs hedging on the wrong technology, arriving too early, etc.
"We're all in on Blockchain! We're all in on VR! We're all in on self-driving! We're all in on NoSQL! We're all in on 3D printing!" The Gardner Hype Cycle is alive and well.
Enjoyed the history, but don't get the premise. Has any tech been watched more closely or adopted faster by incumbents?
> The first cars were expensive, unreliable, and slow
We can say the same about the AI features being added to every SaaS product right now. Productization will take a while, but people will figure out where LLMs add value soon enough.
For the most part, winning startups look like new categories rather than challengers beating an incumbent. Very different from the SaaS winners.
Interestingly, my grandfather worked as a mechanic at a family-owned Chrysler car dealership for 30 years that previously sold carriages. It's in their logo and they have one on the roof.
This somehow reminds me of Jack Dorsey and Howard Schultz.
The Innovator's Dilemma, mentioned here, is great. If you enjoyed this article, don't overlook that recommendation.
Kodak is, for me, a leading example of a leader in an industry that was unable to disrupt itself.
TV networks, relative to Netflix is another.
And who can forget BlackBerry?
All of the owners of the TV networks moved to streaming with varying degrees of success.
- Disney has owned ABC forever and Disney+ is fairly successful
- NBC is owned by Comcast, and Comcast has moved more toward being a dumb pipe and streaming, and is divesting much of its linear TV business.
- CBS/Paramount just paid off Trump, and it is yet to be seen what will happen to it
Articles like this are exercises in survivor bias.
Let's see a similar story for, say, dirigibles.
From the article:
_____
The first cars were:
- Loud and unreliable
- Expensive and hard to repair
- Starved for fuel in a world with no gas stations
- Unsuitable for the dirt roads of rural America
_____
Reminds me of Linux in the late 90s. Talking to Solaris, HPUX or NT4 advocates, many were sure Linux was not going to succeed because:
- It didn't support multiple processors
- There was nobody to pay for commercial support
- It didn't support the POSIX standard
>- Starved for fuel in a world with no gas stations
Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.
>- Unsuitable for the dirt roads of rural America
but the process of improving roads for the new-fangled bicycle was well underway.
Linux won on cost once it was "good enough". AI isn't free (by any definition of free) and is a long way away from "good enough" to be a general replacement for the status quo in a lot of domains.
In the areas where it does make sense, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.
By the time Linux won it was better - by 2003 you could take a workload that took eight hours on some ridiculous Sun machine and run it in 40 minutes on a Xeon box.
“disruption doesn’t wait for board approval”
Great line.
The article seemed more apropos to the US automobile industry than SaaS.
Thing is, those companies can't do much if whole lines of business are going to become obsolete. Behind every company there is a core competence that forms the value, and the rest of the business is just a wrapper. When the core competence is worthless, the company is simply out. Even if they know it's coming, there's little they can do. In fact, the best thing they can actually do is turn the company into a milk cow and extract all the value they can here and now, stopping all investment in the future - that will probably generate enormous profits for a few years. Extract them and invest in the wider stock market.
This kind of article has to be a subgenre of business writing.
Why didn't all the carriage makers (400+) become Ford, General Motors, and Chrysler? Why didn't hundreds of catalogue sales companies become Amazon? Why didn't hundreds of local city taxi services become Uber and Lyft?
Hint: there are hundreds on one side of these questions and a handful on the other.
Beyond the point that a future market doesn't necessarily have space for present players, the "Ooh, look how foolish, they missed the next wave" articles miss the point that present businesses exist to make money in the present, and generally do so. If you're a horseshoe maker, you may know your days are numbered, but you have equipment and you're making money. Liquidating to jump into the next wave may not make any sense - make your product 'till demand stops and retire. Don't reinvest, but maybe raise prices and extract all you can from the operation now. Basically, "failed to pivot" applies to startups that don't have a capital investment and an income stream tied to a given technology. If you have those, speculative pivoting is ignoring your fiduciary duty to protect that stuff while it's making money, even if the income stream is declining.
And sure, I couldn't even get to the part about AI, this offended the economist part of me so much...
Nice article, but then it ends with the brain-dead "jump on [current fad]".
If this was published a few months ago, it would be telling everyone to jump into web3.
Yes, it would have been a much better article if it told us how to be sure that AI is the next automobile and not the next augmented reality, metaverse, blockchain, Segway, or fill-in-your-favorite-fad.
Has that ended well?
HN (not YC, which readily invests in blockchain companies) is usually about a decade behind on blockchain knowledge. Paying 2-6% of all your transactions to intermediaries of varying value-add may seem sensible to you. That's fine.
Credit cards are not the only alternative to crypto currencies.
My bank transfers within the country cost me nothing to send or receive, for example.
Merchants aren't the customer target for credit cards, consumers are. Credit card payments are reversible and provide a reward. There are lots of options available that are better for merchants than credit cards (cash, debit cards, transfers, etc). But they all lose because the consumer prefers credit cards.
Cash isn't really great for merchants. You have to handle it, safeguard it, count it, and get it to the bank. Many hands are involved in that process, and theft or loss can occur at any of them, or through robbery/burglary. I don't know if it's a break-even with payment card fees, but I bet it is close.
Yes, that's the varying value-add mentioned in the comment you're replying to. I pay 3.5% of every card transaction to Square. I don't get 3.5% cash/rewards back.
Do you get a discount for paying with cash (or blockchain)? In general the answer is no, meaning you aren't paying the 3.5% transaction fee, the merchant is.
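To make the who-pays question concrete, here is a minimal sketch of the math, using the 3.5% Square fee mentioned above and an assumed (purely hypothetical) 1.5% cash-back rate for the consumer:

    # Illustrative card-fee split; the 1.5% reward rate is an assumption, not from this thread.
    price = 100.00          # sticker price the consumer is charged
    processor_fee = 0.035   # per-transaction fee quoted for Square above
    reward_rate = 0.015     # hypothetical consumer cash-back rate

    merchant_nets = price * (1 - processor_fee)   # 96.50 reaches the merchant
    consumer_pays = price - price * reward_rate   # 98.50 effective cost after rewards

    print(f"merchant nets ${merchant_nets:.2f}, consumer effectively pays ${consumer_pays:.2f}")

Under those assumed numbers, the merchant absorbs the fee at the register while the consumer comes out slightly ahead, which is why the consumer preference described above is so sticky; whether merchants later price the fee into everything for everyone is a separate question.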
That only happens in the US. Europe has much lower credit card fees, and most countries have already figured out low-cost cashless payments.