Y_Y 11 hours ago

Why use AI to make good content when you can take unremarkable content and add the string "AI" to pretend it's interesting? You could have just done an old-school web search for this!

https://www.instructables.com/How-to-Build-a-Fusion-Reactor-...

https://makezine.com/projects/nuclear-fusor/

https://fusor.net/board/viewtopic.php?t=3247

The "artificially intelligent" aspect is trivial.

  • Etheryte 11 hours ago

    The trick is to still use the same underlying information, but forwarded to you by a large language model; that way you can take most of the credit yourself. How unsensational an article would it be if you said you followed a step-by-step guide and built the thing the guide described? But slap an LLM between yourself and the tutorial and now you can suddenly say you built something with the help of AI. Weird times.

    • phito 11 hours ago

      I've seen a lot of that happening recently. People claiming they did this and that, when they actually followed the steps from an LLM. I actually don't mind this, except for the part where they omit the use of an LLM so they can take all the credit themselves.

      Reminds me of all these YouTubers making video essays parroting something they have just learned, without actually mastering the subject.

      • bloomingkales 10 hours ago

        I think we seek to credit the source for reasons that are not clear. Do you feel any particular way if I suggest that your carrot soup only tastes good because of the carrots you put in it? With the follow-up question being: why don't you admit you put carrots in it?

        We don't really do this because there's no concept of ownership of food to that degree. Maybe 200 years ago, when a farmer might chew you out since he tilled the soil to get you that carrot.

        So, for us to not be possessive of knowledge, we’d need to evolve. It’s not likely in our lifetime, but perhaps 500-1000 years down the line the social fabric will evolve to handle this, similar to food possession.

        Or I could be wrong, and we just have a bunch of naturally thieving crooks all over the place.

        • BoxFour 10 hours ago

          > I think we seek to credit the source for reasons that are not clear... We don’t really do this because there’s no concept of ownership of food to that degree.

          Cooking is not a great comparison, and it betrays your point more than anything. If you cook something particularly impressive or complex, people will almost universally ask about the recipe and where it came from.

          Origination is actually a pretty common topic of discussion for many things.

          • makapuf 10 hours ago

            Some fine restaurants here are very proud to tell you the meat comes from such and such a farm, the vegetables from this or that grower, and the wine from this vineyard. That's even on the menu.

            • jajko 9 hours ago

              Here in Switzerland, even burger joints have this for all major ingredients (meat, potatoes, veggies, buns made by genuinely local bakers, etc.). Heck, even McDonald's has it, and it's the worst-tasting and worst-looking option on the market, and not necessarily the cheapest.

              If the population cares about it enough, companies adapt, even if 50 km down the road in another country they sell lower quality under the same name (the EU is generally less strict about food quality, but both tower high above what the US FDA permits).

        • vladms 10 hours ago

          > I think we seek to credit the source for reasons that are not clear.

          For me, a source means I can verify some claims, find another opinion or presentation, and view other work based on that source.

          It is the difference between having links between web pages or having only independent web pages. I guess we can all agree there is value in having the information "X was based on Y".

          The reason people do not credit the source can also be that they don't add any value themselves. If the original source is more complete, more correct, and better presented, then they might "lose" their perceived value. Does this happen in all cases? Probably not. But it is my first instinct when I see it (it happens a lot with "news" articles as well, for example when talking about papers, university announcements, etc., things that could easily be "linked").

      • NoRagrets 27 minutes ago

        As opposed to students who get the steps from other students/teachers/books/youtube/google.

        Why is using AI something to be ‘declared’?

      • pithanyChan 10 hours ago

        White-labelled content became a thing when some idiot made the phrase "ugly (on the inside) artists steal," or something like it, popular...

      • lupusreal 10 hours ago

        There are so many relatively popular youtube channels making 10 to 20 minute videos about interesting subjects which say nothing more than the Wikipedia page for that topic. Whole channels, well liked by the algorithm, built off paraphrasing Wikipedia without citing it.

        Being generous to them, some of them do a good job of picking topics I wouldn't have thought to look up myself. They probably do spend a lot of time reading through diverse subjects looking for the interesting ones.

        • bloomingkales 10 hours ago

          It's probably low-effort scripts performed by influencers. The scripts are probably not written by the influencers, because the influencer usually brings the looks, the voice acting, and the charm.

          • lupusreal 10 hours ago

            Some of them, certainly; that beard-and-thick-rimmed-glasses guy with a dozen channels comes to mind (I don't know his name, but other people have given me the same description of him). Others are pretty weird-looking nerdy guys, tbh; I believe they probably read Wikipedia a lot.

    • notahacker 10 hours ago

      Yeah. The people who conduct bedroom experiments "with AI" are basically the same people who would have conducted bedroom experiments with the help of web pages and YouTube five years ago, or with books if they could get hold of them 30 years ago.

      And actually, I'm not sure the switch from one ubiquitous digital format to another lossier one is the big step change here....

    • SecretDreams 10 hours ago

      I wonder if this is somehow better for society? I found that even getting people to use a search engine to find a link to figure something out was like pulling teeth. Maybe LLMs will teach people how to use search engines? And bypass the various ads on the enshittified internet (for now)?

  • Maken 10 hours ago

    This project seems complex enough that following a guide is not sufficient; you will likely have to swap unavailable parts for equivalent ones, which requires at least a basic understanding of what you are doing.

    HudZah is seemingly using the AI as a search engine for the reference materials he collected for the project, which is a legitimate use of the technology.

    • SecretDreams 10 hours ago

      > HudZah is seemingly using the AI as a search engine for the reference materials he collected for the project, which is a legitimate use of the technology.

      So how would we feel if the headline for this (and many other articles) was:

      X uses search engine to find data required to do Y?

    • asddubs 7 hours ago

      His fusor actually looks much less scrappy than all of the ones linked, with proper vacuum fittings, etc., so I'm not sure I agree.

  • Animats 9 hours ago

    He seems to be following the Instructable rather closely. Or the LLM was following the Instructable.

    See fusor.net.[1] It's unlikely this rig is doing any fusion. It's just a plasma created by high voltage, like a neon lamp. He's not putting in deuterium gas. He's not detecting neutrons. Most of the people who try to do this get the blue glow, but not neutrons.

    The main hazard is then the high voltage.

    These things are not energy producers. It takes about a billion times as much energy input as comes out in neutrons. They can be useful neutron sources for imaging and research.

    [1] https://fusor.net/
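    The "billion times" energy imbalance is easy to sanity-check with rough numbers. The figures below (neutron rate, input power) are assumed ballpark values for a hobbyist fusor, not measurements from this project:

```python
# Back-of-envelope check of the "about a billion times as much energy
# input as comes out in neutrons" claim, using assumed hobbyist figures.
MEV_TO_J = 1.602e-13           # joules per MeV
neutron_rate = 1e6             # neutrons/s (typical amateur fusor, assumed)
neutron_energy_mev = 2.45      # D-D fusion neutron energy in MeV
input_power_w = 400.0          # high-voltage supply draw in watts (assumed)

# Power carried away by fusion neutrons vs. electrical power going in.
output_power_w = neutron_rate * neutron_energy_mev * MEV_TO_J
ratio = input_power_w / output_power_w

print(f"fusion output ~ {output_power_w:.2e} W")   # sub-microwatt
print(f"input/output ratio ~ {ratio:.1e}")         # on the order of 1e9
```

    With those assumptions the fusion output is well under a microwatt against hundreds of watts in, so the input/output ratio indeed lands around a billion.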

  • jdietrich 11 hours ago

    That's like saying "we don't need teachers because we've got books". A how-to guide has to assume a certain level of knowledge, so will inevitably be insultingly basic for some readers or completely impenetrable for others. A static document can't elaborate on something you don't understand or fill in gaps in your knowledge. It can't help you debug a problem or look at your setup and point out where you might have gone wrong.

    We now have good evidence that AI-assisted learning can be substantially more effective than traditional methods, at incredibly low marginal cost.

    https://news.harvard.edu/gazette/story/2024/09/professor-tai...

    https://blogs.worldbank.org/en/education/From-chalkboards-to...

    • Y_Y 10 hours ago

      > That's like saying "we don't need teachers because we've got books".

      I respectfully disagree. I don't think it's wrong or useless to get an LLM to help, I recently did a similar project myself (even though I manually fact-checked all the high-voltage stuff).

      If you find a guide that explains too much then you can skip the parts you know. If it doesn't explain something you don't know yet then you recursively look that stuff up. It doesn't matter if it's a book or a teacher or a search engine or an LLM.

      It's just not good journalism here, because evidently this project has been done lots of times in similar circumstances without LLMs.

      • fsloth 10 hours ago

        > evidently this project has been done lots of times in similar circumstances without LLMs.

        LLMs help organize knowledge and research in a novel domain. The point is not "can you do it". LLMs are great at giving vanilla, standard answers, which is exactly what you want when researching a novel domain. I've been dipping into novel domains for 25 years, and LLMs make it so much more pleasant.

llm_trw 11 hours ago

>HudZah was told that he could be killed by the high voltage, X-ray radiation and possibly other things. This only made him more excited. “My whole intention was, ‘If I fuck up, I’m dead, and this is why I should do it,’” he said.

It sounds like he got exactly what he wanted.

>I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.

The one thing I'm somewhat ambivalent about is that LLMs are extremely atomizing. It's incredibly easy to go offline for months on end when you're talking to your computer.

If I got hit by a bus in 2005 when I was contributing to Linux and Postgres there were people who would pick up what I was doing and carry it forward.

If I get hit by a bus today, unless someone went through my chats, no one would really have any idea what I'd been working on and carry it on. I have a suspicion that a ton of the best and brightest have gone dark for this reason in the last two years.

  • stevage 10 hours ago

    Wow, I've never heard anything like this before. Is this really a thing? People disappearing because they're so deep down the rabbit hole with AI and don't need to talk to anyone?

    • llm_trw 9 hours ago

      I know a number of people who were building AI systems who either completely dropped off the face of the earth or show up once every few months with an update and then disappear again. Granted, the majority of them are perverts whose main use case is mixed-reality waifus, but still.

      I can't exactly blame them.

      In the latest furor over DeepSeek R1, the conversation online was _substantially_ worse than what you'd get from feeding the original paper into R1 and talking to it.

      This was the first time I genuinely wondered what the point of reading groups, message boards, and the like is. A model that you can run locally for $6,000 at 20 tokens/s beat however many thousands of people, because it actually spent the time to read what it talked about.

      • asddubs 7 hours ago

        Comment sections where no one read anything but the headline, having the same shallow discussions over and over again, are a real thing, but I would wager that feeding an AI the paper and then having a discussion with it is just a different kind of shallow. You might as well just read the paper; the AI won't have any insights. The solution isn't AI, the solution is not talking to people who are substantially less informed than you and not really interested in becoming more informed. Hacker News is sometimes good and sometimes not; it depends heavily on the topic.

    • pjc50 9 hours ago

      How would you tell?

      "Hikikomori" is not a new thing, long pre-dates AI, but I think similar things are starting to be observed outside of Japan. Internet-enabled entertainment competes with the real world and occasionally achieves total victory.

      https://pmc.ncbi.nlm.nih.gov/articles/PMC4912003/

    • jajko 9 hours ago

      Some people are... weirder, for lack of a better word, than one can imagine. We always project our own experiences and worldview onto the world around us, and (at least by default) interpret other people and their actions as if they were our own.

      Suffice to say, that's not how you will understand the behavior of others, especially in non-trivial situations, other cultures, and so on. Just accept that people are different, and you may very well never understand how they see the rest of the world and other people.

latexr 11 hours ago

You “must weep”? All you’ve demonstrated were large amounts of laughter in the video and unfettered enthusiasm and encouragement in the text. Where exactly is the weeping?

  • otikik 11 hours ago

    Don't ask the human!

    Paste the post text to chatgpt and ask chatgpt instead.

    Then weep.

    • raverbashing 11 hours ago

      Can AI weep for us?

      • Lanolderen 11 hours ago

        "I can’t physically weep, but I can vibe with the emotions and listen if you need to vent. What's going on?" Not getting replaced yet.

      • latexr 11 hours ago

        Sounds like a short story Harlan Ellison would write.

        • bananaflag 11 hours ago

          I have no mouth and I must weep

          • otikik 11 hours ago

            AI have no moth and AI must weep

            • lproven 10 hours ago

              Are AIs lepidopterists? Do they manifest interest in butterflies and their relatives? Inquiring minds want to know...

              • otikik 8 hours ago

                Heh. Moth.

  • newswasboring 11 hours ago

    Is crying for joy not a universal thing? It's fairly common in my culture.

ngt1287 11 hours ago

The article sounds like an elaborate infomercial:

"It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon."

"I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way."

It would be worth following the money of this submarine article.

stevage 10 hours ago

By far the most interesting part of this post is this stuff:

> It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.

I often feel like a complete newb with AI tools, but I don't really know how to level up. I'd love to watch someone like this, just to see what's possible.

  • halfmatthalfcat 10 hours ago

    It's hyperbole. The way we used to dumpster-dive through Google, SO, etc., now we're just adding LLMs into the mix. Same shit, different browser tab.

    • c-fe 9 hours ago

      I don't think it's hyperbole. I'm young, and I have friends my age who use these AI tools in ways that seem borderline nauseating to me. A friend of mine built a whole polished project page for his app with Cursor, and when I watched him for a bit I was surprised how fast he interacted with Cursor, Claude, and ChatGPT. Of course it wasn't perfect, and I cringe at the idea of creating a project with tons of AI-generated files I don't understand, but still, I felt like my grandma when she watches me use my phone or laptop.

      • halfmatthalfcat 7 hours ago

        Take it from someone who's been around for a while: app templating, boilerplate generators, etc. have been around since antiquity. It's just an iteration on that. This isn't Neo plugged into the Matrix; don't mistake evolution for revolution.

j16sdiz 10 hours ago

I don't know what made this a story.

He knew what a fusor is, knew how to find more information, already had contact with hobbyist groups, and had been warned how dangerous it can be.

At this stage, AI is just a glorified search engine.

lucianbr 11 hours ago

> some people are going to be in for a very uncomfortable time in short order

> it made me feel old and alarmed by the number of new tools at our disposal

> I believe that laptops and PCs will give way to a more novel device rather soon.

> So, er, like, good luck if you’re not paying attention to this stuff.

All I see is some serious effort to foist some FOMO on me. "Something big is coming soon" is repeated ad nauseam in this article and many, many others.

  • gitaarik 11 hours ago

    This guy must be a newcomer in AI land.

Havoc 9 hours ago

Wish he'd elaborate on the tools and models. The article basically amounts to "he does AI magic with his computer".

feverzsj 11 hours ago

*Glorified search, but the results have to be checked by manual search.

Maken 10 hours ago

It is a bit unrelated, but I cannot understand why the video in the article is mirrored.

test1235 11 hours ago

FYI:

FortiGuard Intrusion Prevention - Access Blocked Web Page Blocked

You have tried to access a web page that is in violation of your Internet usage policy. Category Pornography URL https://www.corememory.com/

  • latexr 11 hours ago

    I went to check what FortiGuard is:

    > FortiGuard Labs provides (…) AI-powered threat intelligence (…)

    Ah, that explains it. The article is just a Substack with a custom domain, this sounds like an error on their part, not something the author can or should concern themselves with.

    • brabel 10 hours ago

      It's AIs all the way down.

fzeroracer 11 hours ago

It seems like every now and then we get someone doing dangerous science experiments in their backyard that threaten not only themselves but their neighbors as well. Brings to mind David Hahn, only worse, since with AI the odds of it hallucinating a step and causing a serious problem are much higher.

  • pololeono 11 hours ago

    I don't know; I could imagine that this changes when AI becomes more sensible than the average internet advice.

    Also, I would like to see some evidence of how dangerous the experiment with the AI-inspired fusor actually was. I recently read here that "hiking in jeans" is dangerous.

    • RansomStark 11 hours ago

      I'd be interested in how an LLM could become more sensible than the average of the internet; they are by definition the average of the internet. I'm waiting for the next major innovation, and given AI's history, I might be waiting a long time.

      Fusors are somewhat dangerous: they use extremely high voltage, in the thousands to hundred-thousand-volt range. X-rays become an issue above around 30,000 volts. But fusors are frequently made by high school students, and I'm not aware of any deaths.

      Lots of details available here: https://fusor.net/board/viewtopic.php?t=4843

      • lupusreal 10 hours ago

        > they are frequently made by high school students, and I'm not aware of any deaths.

        That's been done no more than a few dozen times, I think? Maybe fewer than that. I think it's a rare enough activity that the accident rate simply hasn't been probed enough.

        Wood burning with microwave transformers is notorious for getting people killed, but how many people does it kill relative to how many have tried it? Maybe a handful out of a hundred? On the other hand, kids building fusors are probably smarter than the general public, to whom wood burning with transformers is frighteningly accessible. I don't think teenagers building fusors is quite that dangerous, but I don't think we have enough data to call it a statistically safe activity.

      • exe34 11 hours ago

        > they are by definition the average of the internet.

        Are you referring to base models?

        Nowadays they also train on stolen books and are further "aligned" based on feedback. I imagine they are already learning to teach based on feedback from users.

        • RansomStark 10 hours ago

          To be honest, I was using "internet" as shorthand for the average of human knowledge, on the basis that most books, peer-reviewed articles, and everything else are already on the internet, even if only in the more unsavory corners (I've seen nothing to suggest the foundation-model producers were or are much bothered about where the data comes from).

          But yes, referring to base models. I'm also not convinced that the average book is any more trustworthy than the average webpage, whether it's a purely technical book, where you really need the webpage of errata to be able to use the examples, or the more pop-sci books that cherry-pick data and jump to completely unfounded conclusions (I'm thinking of the ancient-engineers, aliens-built-the-pyramids books).

          The feedback is great and might work in some areas, like technical knowledge. But once you step outside of the physical sciences and engineering, you don't so much end up with better-quality information as with a curated experience that aligns with the model owner's views (think DeepSeek and Tiananmen Square).

          • exe34 5 hours ago

            The nice thing about books, especially STEM ones, is that you can tell if there's a problem because there will be inconsistencies. So even without the errata, you can fuzz until it all matches.

newswasboring 11 hours ago

This paragraph was very uncanny valley for me.

> Eventually, however, HudZah wore Claude down. He filled his Project with the e-mail conversations he’d been having with fusor hobbyists, parts lists for things he’d bought off Amazon, spreadsheets, sections of books and diagrams. HudZah also changed his questions to Claude from general ones to more specific ones. This flood of information and better probing seemed to convince Claude that HudZah did know what he was doing, and the AI began to give him detailed guidance on how to build a nuclear fusor and how not to die while doing it.

jacknews 10 hours ago

"AI flop house."

I'm really not sure what to make of that.

majke 12 hours ago

> He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them. It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.

ArchitectAnon 10 hours ago

I was looking for a fire standard; there's a particular fire standard, BS 5824, that I don't have, which is referenced on a manufacturer's website, and which they claim relates to fire curtains. For context, yesterday as an experiment I asked DeepSeek to summarise BS 5839 Part 6, a standard I know quite well, and it did a pretty good job. Let's ask DeepSeek to summarise BS 5824...

"*BS 5824:2013* is a British Standard titled "Wall and floor tiling – Design and installation of ceramic, natural stone, and mosaic tiling in normal conditions – Code of practice". It provides guidelines for the *design, materials, installation, and testing* of tiling systems in interior and exterior applications..."

What? OK let's check the standard on the website of the actual body that publishes them: BSIgroup.com

"BS 5824:1980 Specification for low voltage switchgear and controlgear for industrial use. Mounting rails. C-profile and accessories for the mounting of equipment... Cost £149"

Oh shit, the manufacturer's website is completely wrong, and so is the AI. They literally have no clue what they are talking about. 1. Let's not specify their fire curtains in my building. 2. Don't trust the AI.

My conclusion: if the info you need to do your job is behind a paywall or only in expensive textbooks, then the AI hasn't seen it and will make something up that's probably wrong. And probably don't get it to write your website, or you will look like an idiot...

hsuduebc2 11 hours ago

I love his enthusiasm.

GistNoesis 11 hours ago

The elephant in the room is that this young man is indeed doing the right thing; he is just too poor to buy the right tools.

How do you think the "real scientists(TM)" work? They use AI tools too.

Do you really think you can design a tokamak Stellarator with pen and paper?

What good engineers do is click the "topological optimization" button on their physical simulator, and then they build the machine according to the plan the computer makes.

Do you really think deep-seek can't use your COMSOL or Ansys multiphysics tool?

The finite element method was invented in the 1950s; our more modern AIs use variants of physics-informed neural networks to solve the differential equations of physics.

An LLM without reinforcement learning won't invent your flying saucer from reading stuff on the internet, but let your local AI play for a day with an MHD simulator https://www.jp-petit.org/science/mhd/m_mhd_e/m_mhd_e.htm and the sky is not the limit.

  • otikik 9 hours ago

    > Do you really think you can design a tokamak Stellarator with pen and paper?

    No, but you can draw a very nice strawman with pen and paper.

    > The "topological optimization" button on their physical simulator,

    An engineering tool for topological optimization is similar to "AI tools" only in the sense that a big pile of numbers gets crunched. Saying that they "are the same" is like saying that a shark is the same as a kangaroo.

    Besides, for tokamaks and the like, I doubt that off-the-shelf tools were enough for them. I would bet that they had to build their own tools anyway.

    > Do you really think deep-seek can't use your COMSOL or Ansys multiphysics tool

    A monkey can "use" those tools, as long as they have bright buttons and it's conditioned to press them for food. That doesn't mean I would trust the results it comes up with.

    > but let your local AI play for a day with a MHD simulator ... and the sky is not the limit.

    The limit is still death, which can come much much faster than the sky.