Reminds me of a recent discussion we had among Stack Overflow moderators:
> “Think about it,” he continued. “Who discovers the edge cases the docs don’t mention? Who answers the questions that haven’t been asked before? It can’t be people trained only to repeat canonical answers. Somewhere, it has to stop. Somewhere, someone has to think.”
> “Yes,” said the Moderator.
> He leaned back. For a moment, restlessness flickered in his eyes.
> “So why wasn’t I told this at the start?”
> “If we told everyone,” said the Moderator gently, “we’d destroy the system. Most contributors must believe the goal is to fix their CRUD apps. They need closure. They need certainty. They need to get to be a Registered Something—Frontend, Backend, DevOps, Full stack. Only someone who suffered through the abuse of another moderator closing their novel question as a duplicate can be trusted to put enough effort to make an actual contribution”
What does “destroy the system” mean here?
The metaphor doesn't match very well here, because Stack Overflow is not selling new tapes at a premium but giving them away for free, and reading a Stack Overflow answer is harder than asking an LLM.
Could be that the AI companies feeding on Stack Overflow are selling tapes at a premium, and if they tell you it's only supervised learning from a lot of human experts, it's going to destroy the nice bubble they have going on around AGI.
Could also be that you have to do the actual theory / practice / correction work for your basal ganglia to "know" about something without thinking about it (i.e. learn), contrary to the novel, where the knowledge is inserted directly into your brain. If everyone uses AI to lazily skip the "practice" phase, then there's no one left to make the AI evolve anymore. And the world is not a Go board where the AI can learn against itself indefinitely.
If you have to ask, you aren't ready to know the answer. There are some things you have to figure out on your own. This is one of them.
Use it to train an "AI"? :)
Probably not the OP's intent, though. I suspect there are a lot of ways to destroy the system.
Link to the story without ads
https://www.inf.ufpr.br/renato/profession.html
Thanks - the OP’s site was a truly horrible experience
I dunno I just copied it into emacs. Another free short story to keep in my digital collection.
That was my exact reaction after opening this post...
I haven't seen any ads on the site - I guess AdNauseum works well :)
For some reason Safari's reader view skips a part of the page.
I read this a long time ago, when I was a kid. Back then I thought about the education system and how it sometimes inhibits creativity in students. But right now another comparison comes to mind - I don't know how relevant it is, though, so please don't judge it strictly.
Modern "AI" (LLM-based) systems are somewhat similar to the taped humans in this story. They may have a lot of knowledge, even a lot of really specialized knowledge, but once that knowledge becomes outdated, or they are required to create something new, they struggle a lot. Even the systems with RAG and "continuous memory" (not sure if that's the right term) don't really learn anything new. From what I know, they can accumulate knowledge, but they still struggle with creativity and skill learning. And that may be a problem for the users of these systems as well, because they may rely on the shallow knowledge provided by the LLM instead of thinking and trying to solve the problem themselves.
Luckily, most of the humans in our world can still follow George's example. That's what makes us different from LLM-based systems. We can learn something new, and learn it deeply, creating deep and unique networks of associations between different "entities" in our minds, which allows us to be truly creative. We can also dynamically update our knowledge and skills, as well as our qualities and mindset, and so on...
That's what I'm hoping for, at least.
What concerns me is that depth of learning is more discouraged than ever. It's been discouraged for a long time, which is natural, since we prefer simple things to difficult/complex ones. But we're pushing much harder than ever before, from influencer education videos to the way people push LLMs ("you can just vibe code, no thinking required"). We've advanced enough that it's easy to make things look good enough, but looks can be deceiving. It's impossible to know what's good enough without depth of knowledge, without mastery.
No machine will ever be sufficient to overcome the fundamental problem: a novice is incapable of properly evaluating a system, and neither a machine nor another human can do it for them (despite many believing they can). It's a fundamental information problem. The best we can do is mirror our human system, where we trust the experts, who have depth. But we already see the limits of that, and how frequently experts get ignored by those woefully unqualified to evaluate them. Maybe it'll be better, since people tend to trust machines more. But for the same reason it could be significantly worse. It's near impossible to fix a problem you can't identify.
It is my observation that more and more companies, even FAANG, not only encourage fast iteration with LLM tools but discourage true study and thinking things through. The latter is inevitably slow, which is not favourable in today's fast-iteration world. This makes me think it is so important to get onto the right team; otherwise one runs the risk of never properly thinking and experimenting again.
So how do we find or make the right teams?
The incentives and loss function are pointing toward short-term attention and long-term amnesia. We are fighting the algorithms.
I think low-level programming, or anything critical, is still relatively safe. That's where I wish I could be, but I'm still very far away from it.
Ironically, while machine learning is getting "deeper & deeper", human learning is getting more and more impatient and shallow.
I have been searching for "vibe coding" videos on YouTube that are not promoting something. I found this one and sat down and watched the whole three hours. It does take a lot of real effort.
https://www.youtube.com/watch?v=EL7Au1tzNxE
I'm a machine learning researcher myself and one of the things that frustrates me is that many of my peers really embrace the "black box" notion of ML models. There are plenty of deep thinkers too, but like with any topic the masters are a much smaller proportion. Just a bit of a shame given that I'm talking about a bunch of people with PhDs...
As for my experience, vibe coding is not so much vibing as needing to do a lot of actual work too. I haven't watched the video you linked, but that sounds like it reflects my actual experience and that of people I know offline.
Since the JavaScript and Python worlds are heavily polluted by LLMs, I've started to look into the Rust and Cargo ecosystem. Surprisingly, it has picked up the pace just as quickly.
Once Rust can be agentically coded, there will be millions of mines hidden in our critical infrastructure. Then we are doomed.
Someone needs to see the problem coming and start working on paths to a solution.
The mines are already being placed. There are plenty of people vibe coding C programs, and despite C documentation and examples being more prolific than Rust's, well... C vulnerabilities are quite easy to create and are even present in those examples. You can probably even get the LLMs to find these mines, but that requires you to know about them first.
That's the really scary part to me. It really ramps up the botnets. Those who know what to look for have better automation tools to attack, and at the same time we're producing more vulnerable targets. It's like we're creating as much kindling as possible while producing more easy-strike matches. It's a fire waiting to happen.
I did a toy experiment on a pretty low-level crate (serde) in the Rust ecosystem: running the simple demonstration from their website pulled in 42M of dependencies.
https://wtfm-rs.github.io/wtfm-serde/doc/wtfm_serde/
I know this is orders of magnitude smaller than npm or pip, but if this is the best we can get 50 years on from 1970s UNIX on the PDP-11, we are doomed.
It amazes me how much we've embraced dependency hell. Surely we need some dependencies but certainly we're going overboard.
On a side note, I wonder how much of this is due to the avoidance of abstraction. I hear so many say that the biggest benefit they get from LLMs is avoiding repetition. But I don't quite understand this, as repetition implies poor coding. I also don't understand why there's such a strong reaction against abstraction. Of course, there is such a thing as too much abstraction, and it should be avoided, but code, by its very nature, is abstraction. It feels much like how people turned Knuth's "premature optimization is the root of all evil" from "grab a profiler before you optimize, you idiot" into "optimization is to be avoided at all costs".
Part of my questioning here is: as the barriers to entry are lowered, do these kinds of gross mischaracterizations become more prevalent? There seems to be a real dark side to lowering the barrier to entry. Just as we see in any social setting (like any subreddit, or even HN), as the population grows the culture changes significantly, and almost always towards the novice. For example, it seems that on HN we can't even assume that a given user is a programmer. I'm glad we're opening up (just as I'm glad we make barriers to entry lower), but "why are you here if you don't want to learn the details?" How do we lower barriers and increase openness without killing the wizards and letting the novices rule?
This is what people hope the AGI will replace.
A very nice story, and an interesting reflection on the education system.
Also, and this is just an aside, but “the protagonist who is too special for the sorting hat” is a bit of a trope in young adult literature at this point. Is this the first real instance of it? 1957. That’s a while ago! I don’t even know if the “sorting hat” trope was established enough to subvert at the time.
Not really an example of the trope, but I suspect Asimov might have got some of his ideas from Huxley's Brave New World, where it turns out the occupation-segregated dystopia is actually run by an idealistic type who's committed to the system but finds nonconformists and forbidden literature really interesting study subjects for improving it, and exile to the Falklands is actually a reward, sort of...
[deleted]
Unlike hacks like Cline, Asimov gives the special character serious flaws like jealousy. The protagonist's skill is also merely rare, instead of unique, and his roommate seems to be on a higher level still.
> who is too special
"Fans are slans."
No one would have recognized any tropes in 1957 beyond Shakespeare. Even Joseph Campbell wasn’t popularized until decades later.
As mentioned, the word "trope" dates back to ancient times, although generally meaning rhetorical devices like similes and metaphors rather than in the "reused plot" sense generally used today. But even the ancients still recognized those. Aristotle's Poetics deals with plays in addition to poems, and he discusses what sort of plots work in tragedies.
>No one would have recognized any tropes in 1957 beyond Shakespeare.
Nope. Just within science fiction, early issues of Galaxy had many editorials denouncing/mocking science fiction stories with overused tropes, such as the Western transposed to space, or babies being killed as aberrant after a nuclear war because they have ten fingers and toes.
Sorry, I can’t tell if this is sarcastic. I think it has a kernel of truth but overstates it for rhetorical flair.
I’m willing to believe the phrase “trope” wasn’t invented in 1957 if that’s what you are saying. But surely they had the idea of popular little trends in contemporary literature.
They must have known they were writing pulp sci-fi. At least when they got their copies they could feel the texture!
Trope comes from classical Latin.
It comes from Greek "tropos"
This is my favorite Asimov story. It's got a protagonist with compelling motivations, a society that has problems but also convincing reasons why they persist, and a great ending.
mine too, because one of my favourite sff tropes is that the more you regiment society, the more you rely on outsiders and those pushed to the edges for any real innovation.
People stuck following the rules are going to struggle to deal with, or come up with solutions to, problems that are outside the rules.
Two other Asimov stories that are similarly relevant to much of what is discussed on HN for similar reasons are “In a Good Cause—” and “The Dead Past”.
I don’t know of a link for the first. Here’s one for the second.
https://xpressenglish.com/our-stories/dead-past/
The Dead Past is one of my favorite Asimov stories. We don’t have the tech that’s in the story, but the idea of lost privacy is relevant today.
I am sort of questioning my use of LLMs again after, at first reluctantly, starting to use them multiple times a day. This story seems like it was intended as an allegory for LLM use, though I know it couldn't have been.
It's an allegory about trusting "best practices", standardized bodies of knowledge¹, and "that's the way it's always been done". Not that those things necessarily don't work, they do in the story as well as in real life, but they need to adapt to change and the story illustrates what happens when they harden from best practice into unquestioned dogma.
¹ There's even a BoK for software developers, the SWEBOK, but I've never met anybody who's read it.
I think it's more about social stratification than bodies of knowledge. The knowledge is treated as a class signifier, especially by the protagonist. In the bit with the friend, the new training he didn't have was practically useful, but, more than that, it sharpened the gap between the "haves" (went to a good school) and the "have-nots".
It's also about hyperspecialization, a concept that was beginning to be noticed at the time.
Why could it not have been? An LLM is just a reasoning machine, something Asimov spent a lot of time thinking about.
I think using an LLM, or even vibe coding, is fine for things you are not absolutely interested in but have to do anyway.
Dr Antonelli said, “Or do you believe that studying some subject will bend the brain cells in that direction, like that other theory that a pregnant woman need only listen to great music persistently to make a composer of her child. Do you believe that?”
Apparently, Asimov was an early critic of the “Mozart in the womb” movement.
It isn't to make a composer out of a baby but to expose a growing brain to complex music. We have no proof it benefits brain development, but we also have no proof it does not.
I studied classical music and came from a challenged background which to be honest is a rarity in that field. Almost everyone I studied with has parents who specifically encouraged music education and had the means to help make that happen. I got mine from some gifted vinyl as a child and fell in love with the orchestra. If I was in this story I'd probably not have been recommended to be a Professional Composer (if social expectations were the equivalent of what Asimov is saying here.)
So yeah, I'm pro 'play Mozart to your baby' :)
I don’t think you can assign that meaning here one way or another. The context in the story at that point (IIRC) is that he’s sort of lying to the protagonist, or at least misleading him.
There's a similar story about a progression of robot repair devices --- which has to end in a "Master Robot Repairman" profession: the folks who repair the robots that repair other robots.
Blanking on the author and title, but I read it a _long_ while ago, and it had a distinctly golden-age feel --- maybe Murray Leinster?
There's something a little like this in Strata by Pratchett (which is lightly sending up Niven's Ringworld and a non-robot-related but similar idea there).
one of asimov's finest, a metaphor that continues to find relevance in my day-to-day existence - that the conclusions we so readily come to are assumptions made in the absence of the awareness of something more
Such a great ending. Really makes one wonder about the current AI hype of getting the machines to take over our work.
This story is set thousands of years in the future, and yet their social norms are broadly those of 1960s America, conspicuously minus the racism. Their notion of gender equality, for instance, is to segregate, but add "(and women)" after every few "men" (respectively "(and husbands)" after "wives"). Stubby Trevelyan smokes, and litters the cigarette butts. This has to be deliberate on the part of the author. I wonder what Ladislas Ingenescu, Registered Historian, has to say about the matter?… if, indeed, he has any original thoughts to share.
I read fantasy set a thousand years in the past, and yet the women are all individualistic and liberated, no women ever spend any time spinning thread, and the monks don't really believe in Christ. Ken Follett really tried with the monks, but although he clearly did a lot of research, it felt like it was alien to him. Br. Cadfael, for all of Ellis Peters's research, still thinks like a modern. For that matter, maybe they had legitimately grasped it and I missed it because I was still Baptist when I read them, while medieval monks were obviously Roman Catholic. I've learned enough since then to know that Baptists don't understand Roman Catholics one bit.
I let my curiosity run and read the citizenship curriculum. While I generally agree with this curriculum, I would argue that the most important thing for a citizen to learn, which should be at the top of the list, is to push back when he thinks something is wrong. That is perhaps even more important nowadays.
Is this still in print, maybe as part of a collection? I tried to find it but couldn't. Many of his other works seem to be available as paperback, including a bunch of story collections.
I have it in print. As part of Isaac Asimov: The Complete Stories Volume 1 (Published by Harper Voyager)
Thanks, just went and bought it!
It's a great collection. Do check out "The Dead Past" as well (it's the first story in the version I have).
What the hell, that was a good read. The ending was great (though the last line did confuse me).
Previously in the story it is mentioned that George, as a child, was curious about the etymology of the Olympics and asked his father, only to be dismissed.
The callback at the end symbolizes his renewed curiosity. He is no longer ashamed of the way his mind works, even if it makes him look different.
[dead]
[deleted]
[flagged]
Perhaps you should review the "Please don't complain about tangential annoyances", "Avoid generic tangents." and related sections of the HN guidelines. They're linked at the bottom of the page.
Go create something original instead of trying to destroy the greatness "created by a white guy" in the past.
The linked page has some more information available, but its author (abelard?) later cites from "Mein Kampf", naming the book's author as "Adolph" (sic!).
Caution is advised.
He is very odd. The name is presumably a reference to Peter Abelard, who was not a nice man (very clever, of course).
Nothing wrong per se with citing what someone you are writing about said about themselves. He has some very odd historical, economic and political theories, but a lot of them are rooted in common misconceptions.
Remind me of a recent discussion we had among Stackoverflow moderator:
> “Think about it,” he continued. “Who discovers the edge cases the docs don’t mention? Who answers the questions that haven’t been asked before? It can’t be people trained only to repeat canonical answers. Somewhere, it has to stop. Somewhere, someone has to think.”
> “Yes,” said the Moderator.
> He leaned back. For a moment, restlessness flickered in his eyes.
> “So why wasn’t I told this at the start?”
> “If we told everyone,” said the Moderator gently, “we’d destroy the system. Most contributors must believe the goal is to fix their CRUD apps. They need closure. They need certainty. They need to get to be a Registered Something—Frontend, Backend, DevOps, Full stack. Only someone who suffered through the abuse of another moderator closing their novel question as a duplicate can be trusted to put enough effort to make an actual contribution”
What does “destroy the system” mean here?
The metaphor doesn't match very well here because stackoverflow is not selling new tape at a premium but giving them for free and reading a stackoverflow answer is harder than asking an LLM.
Could be that AI companies feeding on stackoverflow are selling tape at a premium, and if they tell you it's only supervised learning from a lot of human experts it's going to destroy the nice bubble they have going on around AGI.
Could also be that you have to do the actual theory / practice / correction work for your basal ganglia to "know" about something without thinking about it (i.e. learn), contrary to the novel where the knowledge is directly inserted in your brain. If everyone use AI to skip the "practice" phase lazily then there's no one to make the AI evolve anymore. And the world is not a Go board where the AI can learn against itself indefinitely.
If you have to ask, you aren't ready to know the answer. There are some things you have to figure out on your own. This is one of them.
Use it to train an "AI"? :)
Probably not the OPs intent though. I suspect there are a lot of ways to destroy the system.
Link to the story without ads
https://www.inf.ufpr.br/renato/profession.html
Thanks - the OP’s site was a truly horrible experience
I dunno I just copied it into emacs. Another free short story to keep in my digital collection.
That was my exact reaction after opening this post...
I haven't seen any ads on the site - I guess AdNauseum works well :)
For some reason Safari's reader view skips a part of the page.
I've read this a long time ago, when I was a kid. Back then I thought about the education system and how it sometimes inhibits the creativity within the students. But right now, other comparison comes to mind - I don't know how relevant it is, though, so please don't judge it strictly.
Modern "AI" (LLM-based) systems are somewhat similar to the humans in this story who were taped. They may have a lot of knowledge, even a lot of knowledge that is really specialized, but once this knowledge becomes outdated or they are required to create something new - they struggle a lot. Even the systems with RAG and "continuous memory" (not sure if that's the right term) don't really learn something new. From what I know, they can accumulate the knowledge, but they still struggle with creativity and skill learning. And that may be the problem for the users of these systems as well, because they may sometimes rely on the shallow knowledge provided by the LLM model or "AI" system instead of thinking and trying to solve the problem themselves.
Luckily enough, most of the humans in our world can still follow the George's example. That's what makes us different from LLM-based systems. We can learn something new, and learn it deeply, creating the deep and unique networks of associations between different "entities" in our mind, which allows us to be truly creative. We also can dynamically update our knowledge and skills, as well as our qualities and mindset, and so on...
That's what I'm hoping for, at least.
What concerns me is that learning depth is more discouraged than ever. For a long time it's been discouraged, which is natural as we have a preference for simple things rather than difficult/complex things. But we're pushing much harder than ever before. From the way we have influencer education videos to the way people push LLMs ("you can just vibe code, no thinking required"). We've advanced enough that it's easy to make things look good enough but looks can be deceiving. It's impossible to know what's good enough without depth of knowledge, without mastery.
No machine will ever be sufficient to overcome the fundamental problem: a novice is incapable of properly evaluating a system. No human is capable of doing this either, nor can they (despite many believing they can). It's a fundamental information problem. The best we can do is match our human system, where we trust the experts, who have depth. But we even see the limits of that and how frequently they get ignored by those woefully unqualified to evaluate. Maybe it'll be better as people tend to trust machines more. But for the same reason it could be significantly worse. It's near impossible to fix a problem you can't identify.
It is from my observation that more and more companies, even FAANG, not only encourage fast iteration with LLM tools, but discourage true study and thinking through. The later is inevitably slow, which is not favourable for today’s fast iteration world. This makes me think that it is so important to get into the right team, otherwise one runs the risk of not properly thinking and experimenting again.
So how do we find or make the right teams?
The incentive and loss function are pointing to short term attention and long term amnesia. We are fighting the algorithms.
I think low level programming or anything critical is still relatively safe. That’s where I wish I could be, but still very far away from.
Ironically, when machine learning is getting “deeper & deeper”, human learning is getting more and more impatient and shallow.
I have been searching “Vibe coding” videos on YouTube that are not promoting something. And I found this one and sat down and watched the whole three hours. It does take a lot of real effort.
https://www.youtube.com/watch?v=EL7Au1tzNxE
I'm a machine learning researcher myself and one of the things that frustrates me is that many of my peers really embrace the "black box" notion of ML models. There are plenty of deep thinkers too, but like with any topic the masters are a much smaller proportion. Just a bit of a shame given that I'm talking about a bunch of people with PhDs...
As for my experience vibe coding is that it is not so much vibing but needing to do a lot of actual work too. I haven't watched that video you linked but that sounds to reflect my actual experience and that of people I know offline.
Since the JavaScript and Python worlds are heavily polluted by LLMs, I start to look into Rust and Cargo ecosystem. Surprisingly it picked up the pace as quickly as possible.
Once Rust can be agentic coded, there will be millions of mines hidden in our critical infrastructure. Then we are doomed.
Someone needs to see the problem coming and start to work on the paths to solution.
The mines are already being placed. There are plenty of people vibe coding C programs. Despite C documentation and examples being more prolific than rust, well... C vulnerabilities are quite easy to create and are even in those examples. You can probably even get the LLMs to find these mines, but it'll require you to know about them.
That's the real scary part to me. It really ramps up the botnets. Those that know what to look for have better automation tools to attack and at the same time we're producing more vulnerable places. It's like we're creating as much kindling as possible and producing more easy strike matches. It's a fire waiting to happen.
I did a toy experiment on a pretty low level crate (serde) in Rust ecosystem, to run a simple demonstration from their website pulling in 42M of dependencies.
https://wtfm-rs.github.io/wtfm-serde/doc/wtfm_serde/
I know this is orders of magnitude smaller than npm or pip, but if this is the best we can get 50 years since 70s UNIX on PDP-11, we are doomed.
It amazes me how much we've embraced dependency hell. Surely we need some dependencies but certainly we're going overboard.
On a side note, I wonder how much of this is due to the avoidance of abstraction. I hear so many say that the biggest use they get from LLMs is avoiding repetition. But I don't quite understand this, as repetition implies poor coding. I also don't understand why there's such a strong reaction against abstraction. Of course, there is such a thing as too much abstraction and this should be avoided, but code, by its very nature, is abstraction. It feels much like how people turned Knuth's "premature optimization is the root of all evil" from "grab a profiler before you optimize you idiot" to "optimization is to be avoided at all costs".
Part of my questioning here is that as the barriers to entry are lowered do these kinds of gross mischaracterizations become more prevalent? Seems like there is a real dark side to lowering the barrier to entry. Just as we see in any social setting (like any subreddit or even HN) that as the population grows the culture changes significantly, and almost always to be towards the novice. For example, it seems that on HN we can't even make the assumption that a given user is a programmer. I'm glad we're opening up (as I'm glad we make barriers to entry lower), but "why are you here if you don't want to learn the details?" How do we lower barriers and increase openness without killing the wizards and letting the novices rule?
This is what people hope the AGI will replace.
A very nice story, and an interesting reflection on the education system.
Also, and this is just an aside, but “the protagonist who is too special for the sorting hat” is a bit of a trope in young adult literature at this point. Is this the first real instance of it? 1957. That’s a while ago! I don’t even know if the “sorting hat” trope was established enough to subvert at the time.
Not really an example of the trope, but suspect Asimov might have got some of his ideas from Huxley's Brave New World, where it turns out the occupation-segregated dystopia is actually run by an idealistic type who's committed to the system but finds nonconformists and forbidden literature really interesting study subjects to make it better, and exile to the Falklands is actually a reward, sort of...
Unlike hacks like Cline, Asimov gives the special character serious flaws like jealousy. The protagonist's skill is also merely rare, instead of unique, and his roommate seems to be on a higher level still.
> who is too special
"Fans are slans."
No one would have recognized any tropes in 1957 beyond Shakespeare. Even Joseph Campbell wasn’t popularized until decades later.
As mentioned, the word "trope" dates back to ancient times, although generally meaning rhetorical devices like similes and metaphors rather than in the "reused plot" sense generally used today. But even the ancients still recognized those. Aristotle's Poetics deals with plays in addition to poems, and he discusses what sort of plots work in tragedies.
>No one would have recognized any tropes in 1957 beyond Shakespeare.
Nope. Just within science fiction, early issues of Galaxy had many editorials denouncing/mocking science fiction stories with overused tropes, such as Western transposed to space, or babies being killed as aberrant after a nuclear war because they have ten fingers and toe.
Sorry, I can’t tell if this is sarcastic. Well I think it has a kernel of truth that overstates it for rhetorical flair.
I’m willing to believe the phrase “trope” wasn’t invented in 1957 if that’s what you are saying. But surely they had the idea of popular little trends in contemporary literature.
The must have known they were writing pulp sci-fi. At least when they got their copies they could feel the texture!
Trope comes from classical Latin.
It comes from Greek "tropos"
This is my favorite Asimov story. It's got a protagonist with compelling motivations, a society that has problems but also convincing reasons why they persist, and a great ending.
mine too, because one of my favourite sff tropes is that the more you regiment society, the more you rely on outsiders and those pushed to the edges for any real innovation.
People stuck following the rules are going to struggle to deal with, or come up with solutions too, problems that are outside the rules.
Two other Asimov stories that are similarly relevant to much of what is discussed on HN for similar reasons are “In a Good Cause—” and “The Dead Past”.
I don’t know of a link for the first. Here’s one for the second.
https://xpressenglish.com/our-stories/dead-past/
The Dead Past is one of my favorite Asimov stories. We don’t have the tech that’s in the story, but the idea of lost privacy is relevant today.
I am sort of questioning my use of LLMs again after, first reluctantly, starting to use them multiple times a day. This story seems like it was intended to be an allegory for LLM-use though I know it couldn't have been.
It's an allegory about trusting "best practices", standardized bodies of knowledge¹, and "that's the way it's always been done". Not that those things necessarily don't work, they do in the story as well as in real life, but they need to adapt to change and the story illustrates what happens when they harden from best practice into unquestioned dogma.
¹ There's even a BoK for software developers, the SWEBOK, but I've never met anybody who's read it.
I think it's more about social stratification than bodies of knowledge. The knowledge is treated as a class signifier, especially by the protagonist. In the bit with the friend, the new training he didn't have was practically useful, but, more than that, it sharpened the gap between the "haves" (went to a good school) and "have-nots".
It's also about hyperspecialization. A concept that was beginning to be noticed at the time.
Why could it not have been? An LLM is just a reasoning machine, something Asimov spent a lot of time thinking about.
I think using LLMs or even vibe coding is fine for things you are not particularly interested in but have to do anyway.
Dr Antonelli said, “Or do you believe that studying some subject will bend the brain cells in that direction, like that other theory that a pregnant woman need only listen to great music persistently to make a composer of her child. Do you believe that?”
Apparently, Asimov was an early critic of the “Mozart in the womb” movement.
It isn't to make a composer out of a baby but to expose a growing brain to complex music. We have no proof it benefits brain development, but we also have no proof it does not.
I studied classical music and came from a challenged background, which to be honest is a rarity in that field. Almost everyone I studied with had parents who specifically encouraged music education and had the means to help make that happen. I got mine from some gifted vinyl as a child and fell in love with the orchestra. If I were in this story, I'd probably not have been recommended to be a Professional Composer (if social expectations were the equivalent of what Asimov is saying here).
So yeah, I'm pro 'play Mozart to your baby' :)
I don’t think you can assign that meaning here one way or another. The context in the story at that point (IIRC) is that he’s sort of lying to the protagonist, or at least misleading him.
There's a similar story about a progression of robot repair devices --- which has to end in a "Master Robot Repairman" profession which is the folks who repair the robots which repair other robots.
Blanking on author and title, but read it a _long_ while ago, and it had a distinctly golden age feel --- maybe Murray Leinster?
There's something a little like this in Strata by Pratchett (which is lightly sending up Niven's Ringworld and a non-robot-related but similar idea there).
One of Asimov's finest, a metaphor that continues to find relevance in my day-to-day existence: that the conclusions we so readily come to are assumptions made in the absence of the awareness of something more.
Such a great ending. Really makes one wonder about the current AI hype of getting the machines to take over our work.
Alt link of text only - no cruft http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
This story is set thousands of years in the future, and yet their social norms are broadly those of 1960s America, conspicuously minus the racism. Their notion of gender equality, for instance, is to segregate, but add "(and women)" after every few "men" (respectively "(and husbands)" after "wives"). Stubby Trevelyan smokes, and litters the cigarette butts. This has to be deliberate on the part of the author. I wonder what Ladislas Ingenescu, Registered Historian, has to say about the matter?… if, indeed, he has any original thoughts to share.
I read fantasy set a thousand years in the past, and yet the women are all individualistic and liberated, no women ever spend any time spinning thread, and the monks don't really believe in Christ. Ken Follett really tried with the monks, but although he clearly did a lot of research, it felt like it was alien to him. Br. Cadfael, for all of Ellis Peters's research, still thinks like a modern. For that matter, maybe they had legitimately grasped it and I missed it because I was still Baptist when I read them, while medieval monks were obviously Roman Catholic. I've learned enough since then to know that Baptists don't understand Roman Catholics one bit.
I let my curiosity run and read the citizenship curriculum. While I generally agree with this curriculum, I shall argue that the most important thing for a citizen to learn, which should be at the top of the list, is to push back when he thinks something is wrong. It is perhaps even more important nowadays.
Is this still in print, maybe as part of a collection? I tried to find it but couldn't. Many of his other works seem to be available as paperback, including a bunch of story collections.
I have it in print. As part of Isaac Asimov: The Complete Stories Volume 1 (Published by Harper Voyager)
Thanks, just went and bought it!
It's a great collection. Do check out "The Dead Past" as well (it's the first story in the version I have).
Though it doesn't directly answer your question, isfdb.org is a great reference for publication history of SF: https://www.isfdb.org/cgi-bin/title.cgi?55700
Oh wow, that's amazing, thank you!
It's collected in Nine Tomorrows, most recently reprinted in 1989 per Wikipedia. Used copies may be found online.
I thought this post from Kyla Scanlon[0] did a good job of explaining that eventually the algorithms replace knowledge. Which is not a good thing.
0: https://kyla.substack.com/p/the-four-phases-of-institutional
Ah, I remember that story. Brilliant. Asimov was a wonderful writer.
What motivates you all to learn when you know that information about anything is easily accessible from anywhere ?
"Why?" That's generally what motivates me to learn. Information is merely the raw material for Understanding.
Agreed. The beauty is that there is always another why behind each why.
Another, less optimistic view of this same future is the short story "Pump Six" by Paolo Bacigalupi.
https://windupstories.com/books/pump-six-and-other-stories/
What the hell that was a good read. Ending was great (though the last line did confuse me)
Previously in the story it is mentioned that George as a child was curious about the etymology of the Olympics event and asked his father, only to be dismissed.
The callback at the end symbolizes his renewed curiosity. He is no longer ashamed of the way his mind works, even if it makes him look different.
[dead]
[flagged]
Perhaps you should review the "Please don't complain about tangential annoyances", "Avoid generic tangents." and related sections of the HN guidelines. They're linked at the bottom of the page.
Go create something original instead of trying to destroy the greatness "created by a white guy" in the past.
The linked page has some more information available, but its author (abelard?) later cites from "Mein Kampf", naming the book's author as "Adolph" (sic!). Caution is advised.
He is very odd. The name is presumably a reference to Peter Abelard who was not a nice man (very clever, of course).
Nothing wrong per se with citing what someone you are writing about said about themselves. He has some very odd historical, economic and political theories, but a lot of them are rooted in common misconceptions.