Ask HN: Is anyone else burnt out on AI?
This isn’t a post denying that the field is moving fast and is very interesting to follow; it’s just become too much.
Every week there are several new things, most examples are cherry-picked, benchmarks are overblown, etc.
I run a software company that has not added any AI features, mostly because the usefulness seems to fall below the bar for value that most of our features try to maintain.
I use ChatGPT personally for random searches, and Cursor + whatever model is deemed best at the time, but even with that it takes a lot of work to get something valuable out of it day to day.
I feel like I’m losing my mind when I see startups posting $10M ARR numbers in just 6 months or whatever.
I’m hearing from VCs that churn is 15-30% at many of these companies and they’re far from profitable, but the growth is just wild.
Do I succumb and just add yet another text-generation feature that maps output to an object, like the fill-in-the-blank app of the week?
It feels disingenuous but even companies I know and respect are adding questionable “agent” features that rarely work.
Anyway, how are you feeling?
I hate hearing about it because it triggers some anxiety about my role (SE) changing to the point where it's not something I have any interest in. Producing software in the corporate world is absolutely soul draining; endless meetings, direction from fools, overheads like mandatory compliance/learning, all of the rituals that come and go because someone with authority read some garbage on LinkedIn. Besides pay, the only redeeming quality of the job is that for some part of the day, I get to just be inside my head, working on problems and coming up with solutions. Sometimes a few of us team up and we figure things out together. The software is the fun and rewarding bit.
If AI lives up to its hype, which is a whole other subject, then I expect to see the two things I like about my job vanishing quickly - the pay, and the problem solving.
So as not to take up another top-level post: I am also quite bored of people sharing anything created with AI. I remember when I first saw the AI Jerry Seinfeld video; I was genuinely surprised and amused at what AI was capable of. It's all completely uninteresting now, though. I have friends (some in IT...) sending me scores that ChatGPT awarded them based on what it 'thinks' of their moral value. It seems some are completely taken in by the appearance of intelligence.
It has at least been interesting to me to reflect on how I can still appreciate media that humans make when I find AI media so repulsive. I did not think I cared so much about what was behind the picture or video I was watching, or that someone spent real effort to make something. To be honest I still don't understand it - maybe it's none of those things.
> It has at least been interesting to me to reflect on how I can still appreciate media that humans make when I find AI media so repulsive.
A relevant analog is the old argument about whether independent bands were more real than those who had signed with labels -- about whether money/popularity corrupts art. I never took a side on that, but I do think that most music isn't worth listening to, simply because it's so saccharine and cliched.
So to generalize, Sturgeon's Law is evenly distributed: 90% of everything -- including sci-fi, music, and AI-generated stuff -- is worthless. 90% of AI content is slop because it is prompted by people who have no taste whatsoever; not bad taste, just zero taste; not everyone is gifted. People with refined taste (whether good or bad) can use AI to produce that 10% of worthwhile AI stuff; but they also know how to keep AI content from distracting from the larger work, so you never know they are using AI at all.
I don't think society-wide refinement of taste is possible; Sturgeon's Law is here to stay. Instead, we need a corollary to Sturgeon's Law that points to a solution: you can't overturn it; you can only build filters to avoid the crap. I can't say how to build such filters, but we can start thinking about them.
I’m enjoying the ride so far.
I’m not working on AI, but working with AI.
The leverage feels dramatic for a solo founder in my shoes. I think it’s all the cross-domain context switching. Gemini 2.5 Pro for academic type research, ChatGPT 4o for rapid fire creative exploration, o1-pro for one-shot snippets. Copilot for auto-complete.
It’s exciting honestly. I don’t know where we’re going but I do feel free and in a solid strategic position having my own company and not still being a cog in the machine.
I'm actively seeking jobs that don't have me deal with AI. It's a rough market out there because I want to go counter to the hype.
There are solutions looking for problems right now, and big tech companies that have invested hundreds of billions now have to sell it. Everyone is demanding that AI be used to boost productivity.
It’s ironic, though. VB6 macros in Excel were a major productivity win: point-and-click forms an MBA could whip up in 20 minutes. Software development libraries used to be much faster to develop with, with far less boilerplate.
I’m enjoying the AI wave immensely, because somehow we get to TRY everything that actually matters (the new models and assistants and such) immediately, from our own laptops. It’s way more fun than VR or blockchain or mobile, at least for me. And it’s not all hype: it works really well in some domains, it’s better than Google for many queries, and I closed three PRs today raised by Copilot Agent (rather than suffer the back and forth I took one over halfway through, but the other two were one-shots of simple but annoying maintenance tasks).
As someone with a history of pathological demand avoidance: fuck your AI, I don't care if it's good or not, I'm never going to use it as long as it's being artificially hyped by increasingly unhinged idiots who are desperate for a return on their trillion dollar random word generator.
One problem with AI is that it's a winner takes all market, in the end. Training models is expensive, so we're all just building our castles in someone else's kingdom.
Another problem is that we're turning any problem into a black-box, which takes the fun out of problem-solving.
It may not be winner-takes-all. Study after study shows that the models are converging as they get bigger; their results are getting more similar to one another. The models themselves are becoming interchangeable commodities.
We can use OpenRouter to build agents with any LLM and switching your agent to a new model is a one-line code change. We can write MCP tools that work with most of the decent models.
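To make that concrete, here's a minimal sketch using the openai Python client pointed at OpenRouter's OpenAI-compatible endpoint (the key and model strings are just illustrative examples):

    from openai import OpenAI

    # OpenRouter exposes an OpenAI-compatible API, so the stock client works.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key
    )

    # The one-line change: swap this string to move between providers,
    # e.g. "openai/gpt-4o" or "anthropic/claude-3.5-sonnet".
    MODEL = "anthropic/claude-3.5-sonnet"

    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Summarize this diff."}],
    )
    print(resp.choices[0].message.content)

Everything above the MODEL line stays the same no matter which lab's model you're talking to, which is the whole anti-lock-in argument.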
Honestly, I think we may be entering a period where things start to decentralize and money starts to move towards startups building interesting tools and agent workflows instead of a handful of giant companies training frontier models.
How is it winner-take-all? Training models is expensive, yeah, but big companies are willing to do it, and if DeepSeek is any lesson, it’s not as expensive as we used to think. You might be building a castle in someone else’s kingdom, but you’ve got a few kingdoms to choose from. None have a moat and none seem to be going away any time soon.
It’s cheaper to move from one model to another than it is to train a general purpose model yourself (to say nothing of domain-specific smaller models or anything open source.)
I’m not sure about problems turning into black boxes, LLMs are pretty explicit in my experience when producing a solution (good or bad.) _How_ they came about that solution _is_ a black box, but that’s not a new problem.
I use it every day. I feel like I'm managing a team: I no longer do boring tasks, I control the result, I refine, I correct. The thing must do 80% of my work now; its only limit is memory, and it's quite exhausting to compensate for that. My timeline pours a torrent of news on me every week that seems to say only one thing -- "we will replace you" -- plus astronomical MRRs that tell me to hurry up. It's the same FOMO as crypto, except that crypto delivered nothing compared to this storm. The transformation is massive, we are beyond good and evil, the contradictions are at their peak, the bubble will burst, it's overheating.
We definitely need a "slow news" type of publication that analyzes developments in AI with a critical eye:
- Who does this move the needle for? How does it compare to how things are done now?
- How does the regular person benefit, if at all?
- What's likely to happen to pricing after the initial investor subsidization ends? What does the price history of other 'unicorns' tell us? Airbnb and Uber used to be cheap once too.
- What is the valuation of "AI-first Company X" based on? Who are the insiders and what is their work background?
Too much AI news today is just parroting corporate press releases and CEO keynotes.
I think there is a bit of cognitive dissonance that comes with trying to build stuff with LLM technology.
LLMs are inherently non-deterministic. In my anecdotal experience, most software boils down to an attempt to codify some sort of decision tree into automation that can produce a reliable result. So the “reliable” part isn’t there yet (and may never be?).
Then you have the problem of motivation. Where is the motivation to get better at what you do when your manager just wants you to babysit copilot and skim over diffs as quickly as possible?
Not a great epoch for being a tech worker right now, imo.
>LLMs are inherently non-deterministic.
I'm not an ML guy, but I was curious about this recently. There's a parameter that can be tuned to produce determinism, but currently it also produces worse results. Big [citation needed], but worth a Google if it's of interest. Otherwise I'm in agreement with your post.
Temperature? Definitely tweaks the results…but I don’t know if “deterministic” is a term you can use in any way, shape, or form, in the context of LLMs.
https://www.ibm.com/think/topics/llm-temperature
Unless I’ve misunderstood something, setting a constant seed and a temperature of zero would give deterministic results.
Not good results necessarily, but consistent.
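Roughly, the idea looks like this with the OpenAI Python client (a sketch; the model name is just an example). Worth noting that the API documents `seed` as best-effort reproducibility rather than a hard guarantee, and GPU batching can still introduce small variations even at temperature zero:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # always favor the highest-probability token
        seed=42,        # fixed seed for (best-effort) reproducible sampling
        messages=[{"role": "user", "content": "Write a regex for ISO 8601 dates."}],
    )
    print(resp.choices[0].message.content)

Run it twice with the same inputs and you'll usually, but not provably always, get the same output.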
It's what happens when the hype is at maximum. It's really a bubble that's going to pop one day. Remember blockchains a few years ago?
Just relax and realize it's mostly FOMO: https://www.theregister.com/2025/05/06/ibm_ai_investments/
This feeling isn't uncommon, nor is the pattern that produces it. Any time something new crosses a certain threshold, the bandwagon effect kicks in, it seems to be everywhere for a while, and then... life goes on (sorry, doomers).
I'm retired. So it allows me to have a layman's, hobbyist's interest but I can easily ignore the startup and VC cruft.
Frankly, as a "user", not a potential employee, I don't give much of a fuck about anything more than what I can do with the thing right now. (Which is quite a bit in fact.)
hyping hyped hype
> It feels disingenuous but even companies I know and respect are adding questionable “agent” features that rarely work.
Word. I’m not a huge Microsoft fan, but it feels like they’re just shoving ChatGPT/Copilot down your throat every chance they get. It’s integrated into everything; it’s useful when it’s useful, okay, but it generally isn’t. It’s one more Microsoft-ism that you have to learn to tolerate or simply ignore.
I can’t tell if Nadella’s really betting the farm on it all, or if he’s just trying to leave his mark.
It’s just another hype cycle, they happen rather … cyclically.
I recommend ignoring them. Despite VCs trying to spend it into existence, we aren’t going to have another internet-level event in information technology, and the smartphone+laptop combo is peak personal computing.
I think we are near the crest of this wave but that just means the next one is coming, though.
Management at my company is prioritizing AI and genuinely believes adding random AI features into our products will add tremendous value. My pleas that we should maybe talk to customers to determine what they need have not been heard.
I am having a lot of fun learning about generative AI. It is just a bit thankless, because I know the stuff I am building will be dead on arrival. So I will not get any praise regardless of how well I do my job, and maybe even get blamed.
But hey, after all the junior devs have been starved because no one wants to hire them, I will make bank once the next AI winter comes and companies desperately look for people who can actually code.
If you have your own company you can just weather it out and invest in good talent. Really a good position to be in.
No, it saves me tons of time every day. It's fucking awesome.
Depending upon the niche that your software company services, you might be able to stand out in the market by not adding any AI features. At least if the HN crowd is anything to go by.
I think there are situations where AI as it currently exists is absolutely a value add, but often it does seem like it's been shoehorned into an existing product just to ride the latest trend.
The HN crowd often says they want something, but what they want tends not to stand out and be successful in the market. For example, HN people say they want a simple car with physical controls, or a basic non-smart TV, or a small smartphone. But either those products aren't successful, or manufacturers don't want to sell them.
No. I use it constantly and get a ton of value out of it.
Me too but I'm sick of the sight of it filling up my news feeds and mind-space.
Unironically, drop your news intake. Either back out of news feed platforms entirely, drop subscriptions that feature AI frequently, or install/configure filters that will hide AI-related content.
Damn can’t believe AI still gets into your line of sight when you shut your computer off and stop staring at your phone.
I'm more bemused than burnt out... kinda surprised the hype around it doesn't get called out more often by developers.
LLMs as coding assistants are undeniably time saving devices, especially when working in languages/libraries/platforms/frameworks you aren't already very familiar with, or when needing to generate something very boilerplatey as a one-off.
I am not calling the technology useless by any stretch of the imagination, but its still just so wildly overhyped right now.
It is a pretty common occurrence these days for me to have a blog post open from some "AI industry thought leader" talking about how all developers will be out of work in a year while at the same time I have a Gemini window open and I'm just watching it absolutely flail on relatively simple things like generating a database query or a regex (that is novel and not something that's scattered all over its training set like a simple email validator).
And Gemini 2.5 is, IMO, the best of the models when it comes to programming assistance (having replaced Claude 3.5 which was IMO previously the best), at least for the areas I touch (lots of kotlin/KMP/Android/etc).
As goofy as Gemini sometimes gets it is far less frustrating than asking Claude 4 a question and watching it write out a whole ass answer but then correct itself like 7 times before finally coming to a shitty answer that is worse than 2 of the ones it wiped out while blowing through most of its context window on its loop of indecisiveness.
And relatedly... color me completely unsurprised that this thread got dumpstered off the front page so quickly. Gotta keep pretending like the singularity is going to happen next week.
:D
On that last note, very weird that it’s now nuked from HN. Guess I’m glad I posted from a throwaway