Ask HN: What does the world look like post-AGI?

10 points by cryptozeus 20 hours ago

Assuming we get AGI in a year or two and all the dev jobs are replaced with agents, what does that look like in terms of careers and making money? Will we depend on our agents to make money for us, or rely on the government for universal basic income?

vouaobrasil 19 hours ago

You have to look at AGI not as an independent entity, but as a sort of infinitely copiable mental slave for those who wield it. So, devs and big tech are headed towards a future where the idea is to manipulate AGI systems to concentrate as much of the wealth as possible into their own pockets. The people who will be retained (not fired) are those who are the most useful in directing these hyperintelligent units. It will become an evolutionary war of systems whose only goal is growth and the consumption of more energy to train more advanced AIs.

Make no mistake. By creating AI and nearing AGI, we have started a sort of runaway evolutionary phenomenon of new entities whose objective is to grow (obtain more hardware) and consume energy (train). Because AI amalgamated with humans will become so advanced, it will find new ways to generate energy and new ways to use existing energy (fossil fuels). The insights we will gain from it will be unprecedented and dangerous, and allow those with the most money to become scarily intelligent.

In turn, this intelligence will be so advanced that it will warp and magnify the remaining human instinct of greed embedded within it to create a ruthless, runaway growth that will likely wipe out humanity due to its energy requirements.

The only choice for devs will be either to become lieutenants directing this artificial army or to rely on handouts. That is, if the system doesn't kill you first.

  • quantified 18 hours ago

    It's not the devs in the sense of the coders, but more the senior managers and directors, i.e. the Altmans and Nadellas. Software devs will be mid-tier, just as today.

    Institutional and social media will become more sophisticated at shaping public opinion, bolstering the political party or parties that support AI. Oligarchy will be more entrenched.

    There may be a superabundance of circuses in the form of AI-generated content, but not of bread, because of the profit motives elsewhere in the economy and the need for more energy to produce more things. I'm not convinced we'll make the energy; it would take some significant multiple of the energy devoted to the AI itself to have much impact. People will still work at least as much as they do today, except for a slightly enlarged plutocracy.

    If there are more people in the world, living standards will barely budge. For example, robots replacing doctors and surgeons completely will only happen if the profit generated by them is great, and if they are plentiful there will not be as much profit. So not really any greater availability of medical care, and fewer humans earning bread from medicine.

    There will be interesting culture clashes, because different places will apply it at different rates to different priorities. What do you think the real priorities will be for Saudi Arabia to adopt? How long would it take for Egypt to adopt, and for what purpose? Kazakhstan? Libya? Pakistan?

    • vouaobrasil 18 hours ago

      > It's not the devs in the sense of the coders, but more the senior managers and directors, i.e. the Altmans and Nadellas. Software devs will be mid-tier, just as today.

      True, but devs will power the machine in return for insane rewards and because their intellectual curiosity will be partially manipulated. (Have you ever heard a company say, "this is where you can use your creativity to make a positive difference?" I have...)

      > Institutional and social media will become more sophisticated at shaping public opinion, bolstering the political party or parties that support AI. Oligarchy will be more entrenched.

      100% agreed.

      > I'm not convinced we'll make the energy; it would take some significant multiple of the energy devoted to the AI itself to have much impact. People will still work at least as much as they do today, except for a slightly enlarged plutocracy.

      I think we probably will. Energy use by tech has been going steadily up, and why else would Microsoft be looking to nuclear reactors for AI while other companies invest in wind?

      > If there are more people in the world, living standards will barely budge. For example, robots replacing doctors and surgeons completely will only happen if the profit generated by them is great, and if they are plentiful there will not be as much profit. So not really any greater availability of medical care, and fewer humans earning bread from medicine.

      I think that's plausible, although remember that profit may start to be replaced by direct resource acquisition. If the entire supply chain is automated, money isn't required to obtain final products from that supply chain IF you're at the top.

      > There will be interesting culture clashes, because different places will apply it at different rates to different priorities. What do you think the real priorities will be for Saudi Arabia to adopt? How long would it take for Egypt to adopt, and for what purpose? Kazakhstan? Libya? Pakistan?

      That is a scary thought too.

    • meiraleal 17 hours ago

      > There will be interesting culture clashes, because different places will apply it at different rates to different priorities. What do you think the real priorities will be for Saudi Arabia to adopt? How long would it take for Egypt to adopt, and for what purpose? Kazakhstan? Libya? Pakistan?

      I'm curious which ways of using it will make a country outperform its peers to the point that all the others need to copy it, like capitalism hundreds of years ago.

      • vouaobrasil 6 hours ago

        I guess he might be referring to local problems and how capitalism interacts with culture. For example, Israel has a serious conflict with Palestine: how will they use AI?

  • jay_kyburz 18 hours ago

    Fun thought experiment for a slow Christmas eve.

    Say somebody unlocks AGI. What's the first thing they ask it to do? They might ask it to develop a strategy and tools to make sure it's the only AGI. They might hack, buy, or even physically destroy any competition.

    (This may be done covertly, or with a government mandate.)

    Then once you have the only AGI, you have to ask yourself what you really want. You have wealth, you have power, what are you going to do with it?

    Do you want to bring peace and happiness to the world? Impose your morals on everybody else? Do you want to fix the environment? Do you just want more and more and more until you own everything?

dave4420 19 hours ago

If AGI resulted in software engineer employment getting wiped out, that would not result in governments introducing UBI, no.

I have my doubts that the people owning AGI would allow the rest of us to make money from them, when they could make money from them instead.

  • fragmede 19 hours ago

    Even if there is AGI, someone is still going to have to talk to the AGI and explain the nuances of what it should be building. Real-life software engineers don't agree with each other (or the product manager); that's why we write design docs and specs and talk about details in meetings that couldn't be an email. Some people just don't want to be programmers, so there'll always be someone needed to talk to the computer. How many people are needed to do that may fluctuate, but having a computer capable of doing the work doesn't mean there's a computer available to do it, either. There are machines to dig holes, but they're expensive, so unless it's a really big fence that justifies renting a machine, it's still gonna be a couple of dudes with elbow grease digging holes despite the existence of heavy equipment.

  • muzani 17 hours ago

    Currently half the world's wealth belongs to the top 1%. I think the only thing preventing them from taking more than 50% is that at some point it gets unstable.

    Stuff like UBI is just another mechanism to redistribute it. UBI favors venture capitalists more than, say, Unilever. 50% will still belong to the top 1%; it just shifts who exactly the top 1% are.

not_your_vase 20 hours ago

You mean that everyone in the world can do only one thing, and that's writing memory-leaking, unmaintainable code?

Plumbers, retail workers, roofers, painters (etc etc etc) would like to have a word with you.

On a related note, maybe an hour ago, after googling for hours and bleeding my eyes on U-Boot code, I asked Copilot whether it is possible to specify an image target by partition label for non-ubifs storage. Copilot gave me a very confident, positive answer, with a code snippet. The snippet had incorrect syntax, and after a bit of extra code digging, I found that the answer is no.

I wouldn't hold my breath waiting for AGI.

  • cryptozeus 19 hours ago

    We are on HN, so I asked specifically about dev jobs; no need for the passive-aggressive tone. In your scenario this will only improve from here; error correction will get better, no?

    • not_your_vase 3 hours ago

      Universal income, but only for developers? What makes that universal?

      While arguably it can only improve from here, I wouldn't call this a particularly big success. Google 10 years ago gave more usable results... (too bad that it went downhill too, though that's a different problem)

RGamma 19 hours ago

Employment reorganizes around non-automated activity. This means lots of blue-collar newbs. Of course, robotics isn't slowing down either...

Big Capital will decide what happens when labor has no value anymore.

  • cryptozeus 19 hours ago

    Blue collar will also go eventually; look at the Tesla robot.

aristofun 16 hours ago

I can’t emphasize this enough: we’re so far away from AGI that we don’t even really know the distance, or whether it will ever be covered.

UBI, as utopian and silly as it is, is way more realistic and tangible than AGI.

markus_zhang 18 hours ago

Whoever controls AGI gets to be super powerful and rich. They live in space and under the sea. The rest of us (everyone here) drop out of our jobs and live on the streets.

Humans never make it through the Great Filter.

lunarcave 17 hours ago

If you've played around with LLMs in any serious capacity, you'd figure out that they are pretty good at doing the most sensible thing.

So they're pretty good in applications where you have to produce the most sensible thing.

For example, in any kind of triaging (Tier 1 customer support, copywriting, qualifying leads, setting up an npm project), the best thing to do most likely falls smack in the middle of the distribution curve.

They're not good for things where you want the optimal outcome.

Now there will be abstractions that close the feedback loop to tell the LLM "this is not optimal, refine it". But someone still has to build that feedback loop. Right now RLHF is how most companies approach it.
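To make that concrete, here's a minimal sketch of the kind of loop I mean, where the harness, not the model, decides when the output is good enough. generate() and critique() are hypothetical stand-ins for whatever LLM client you actually use:

    # Minimal sketch of an outer feedback loop: the harness, not the model,
    # decides when the output is acceptable. generate() and critique() are
    # hypothetical stand-ins for real LLM calls.
    def refine(task: str, generate, critique, max_rounds: int = 3) -> str:
        draft = generate(task)
        for _ in range(max_rounds):
            feedback = critique(task, draft)  # e.g. "OK" or a list of problems
            if feedback.strip() == "OK":
                break  # the critic is satisfied; stop refining
            draft = generate(
                f"{task}\n\nPrevious attempt:\n{draft}\n\nFix these issues:\n{feedback}"
            )
        return draft

The hard part isn't this loop; it's writing a critique() that is actually a reliable judge of "optimal".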

But that capital G, generalizability, is what makes AI → AGI.

LLMs are really good exoskeletons for human thought and autonomous machines for specialized tasks. I'm not saying this will always be the case, but everyone who's saying AGI is around the corner has a funding round coming up ¯\_(ツ)_/¯

Lionga 2 hours ago

People here are so batshit about AI, let alone AGI, which nobody even has a real definition for (just more and more of the statistical inference "AI" is doing now? That's what the new "reasoning" models do). Or, at best, they lower AGI to some useless definition that is still just an LLM, which often can't even do basic addition of simple numbers.

We are surely further away in time from any real AGI than we are from a moon landing, most likely further away than from the Roman Empire. To think or even worry about it is a fool's game.

It is much more likely that we wipe out the planet with nukes than that we develop a real AGI.

feznyng 17 hours ago

I'm a junior engineer and not an AI expert so please take the following rambling with a massive grain of salt.

The black box that lets non-technical folk type in business requirements and generate an end-to-end application is still very much an open research question. Getting 70% on SWE-bench is an absolute accomplishment, but have you seen the problems?

1. These aren't open-ended requests like "here's a codebase, implement x feature and fix y bug." They're issues with detailed descriptions written by engineers, evaluated against a set of unit tests. Who writes those descriptions? Who writes the unit tests to verify whatever the LLM generated? Software engineers.

2. OpenAI had a hand in designing the benchmark, and part of the changes they made included improving the issue descriptions and refining the test sets [1]. Do you know who made these improvements? "Professional software developers."

3. Issues were pulled from public, popular open-source Python projects on GitHub. These repos have almost certainly found their way into the model's training set. It doesn't strike me as unlikely that the issues and their solutions ended up in the training set too.

I'm a lot more curious about how well o3 performs on the Konwinski Prize, which tries to fix the dataset-tainting problem.

The proposed solution to this is to just throw in another AI system that can convert ambiguous business requirements/bug reports into a formal spec and write unit tests to act as a verifier. This is a non-trivial problem, and reasoning-style models like o1 degrade in performance when given imperfect verifiers [2]. I can't find any widely used benchmarks that check how good LLMs are at verifying LLM-generated code. I also can't find any that check end-to-end performance on prompt -> app problems, I'm guessing because that would require a lot of subjective human feedback that you can't automate the way you can unit tests.

LLMs (augmented by RL and test-time compute, like o3) are getting better at the things I think they're already pretty good at: given a well-defined problem that can be iteratively reviewed/verified by an expert (you), come up with a solution. They're not necessarily getting better at everything that would be necessary to fully automate knowledge jobs. There could be a breakthrough tomorrow using AI for verification/spec generation (in which case we, and most everyone else, are well and truly screwed), but until that happens the current trajectory seems to be AI-assisted coding.

Software engineering will be about using your knowledge of computing to translate vague/ambiguous feature requests and bug reports from clients/management into detailed, well-specified problem statements, and to design tests that act as ground truth for an AI system like o3 (and beyond) to solve against. Basically test-driven development on steroids :) There may indeed still be layoffs, or maybe we run into Jevons paradox and there's another explosion in the amount of software that gets built, necessitating engineers who are good at using LLMs to solve problems.
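A rough sketch of what I mean, with human-authored tests as the spec. solve_against_tests() and generate_code() are hypothetical names; the tests themselves are ordinary pytest/unittest files that the engineer writes:

    # Rough sketch of "TDD on steroids": human-written tests are the ground
    # truth, and a loop asks the model for an implementation until the suite
    # passes. generate_code() is a hypothetical LLM call; test_cmd might be
    # ["pytest", "tests/"].
    import subprocess

    def solve_against_tests(spec, test_cmd, generate_code, attempts=5):
        failure_log = ""
        for _ in range(attempts):
            code = generate_code(spec, failure_log)
            with open("solution.py", "w") as f:
                f.write(code)
            result = subprocess.run(test_cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return True  # suite is green: accept the generated code
            failure_log = result.stdout + result.stderr  # feed failures back
        return False

In that world, the engineering skill is in the spec and the tests; the inner loop is a commodity.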

However, if the worst comes to pass, my plan is to find an apprenticeship as quickly as possible. I've seen the point that an overall reduction in white-collar work would result in a reduction of opportunities in the trades, but I doubt mass layoffs would occur in every sector all at once. Other industries are more highly regulated and have protectionist tendencies that would slow the adoption of AI automation (law, health, etc.). Software has the distinct disadvantage of being low-regulation (there aren't many laws that require high-quality code outside of specialized domains like medtech) and having a culture of individualism that deters attempts at collective bargaining. We also literally put our novel IP up on a free, public, easily indexable platform under permissive licenses. We probably couldn't make it easier for companies to replace us.

So while at least some knowledge workers still have their jobs, there's an opportunity to put food on the table by doing their wiring, pipes, etc. The other counterargument is that improvements in embodied AI, i.e. robotics, will render manual labor redundant. The question isn't whether we'll have the tech (we will); it's whether the average person is going to be happy letting a robot armed with power tools into their home, and how long it will take for said robot to be cheaper than a human.

[1] https://openai.com/index/introducing-swe-bench-verified/

[2] https://arxiv.org/abs/2411.17501