Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, “1999 was described as being the peak of human civilization in ‘The Matrix’ and I laughed because that obviously wouldn’t age well and then the next 25 years happened and I realized that yeah maybe the machines had a point.”

  • masterspace@lemmy.ca · 2 days ago

    When I heard that line I was like “Yeah, sure. We’ll never have AI in my lifespan” and you know what? I was right.

    Unless you just died or are about to, you can’t really confidently make that statement.

    There’s no technical reason to think we won’t in the next ~20-50 years. We may not, and there may turn out to be a technical reason why we can’t. But the previous big technical hurdles were the amount of compute needed and that computers couldn’t handle fuzzy pattern matching; modern AI has effectively found a way of solving the pattern matching problem, and current large models like ChatGPT model more “neurons” than are in the human brain, let alone the power that will be available to them in 30 years.

      • Match!!@pawb.social · 2 days ago

        there’s plenty of reason to believe that, whether we have it or not, some billionaire asshole is going to force you to believe and respect his corporate AI as if it’s sentient (while simultaneously treating it like slave labor)

      • masterspace@lemmy.ca · 2 days ago

        There are plenty of economic reasons to think we will, as long as it’s technically possible.

    • 10001110101@lemm.ee · 2 days ago

      current large models like ChatGPT model more “neurons” than are in the human brain

      I don’t think that’s true. Parameter counts are more akin to neural connections, and the human brain has something like 100 trillion connections.
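
      For a rough sense of the scales involved, here’s a back-of-the-envelope comparison in Python (all figures are commonly cited ballpark estimates; the “frontier LLM” entry is an assumption, since exact parameter counts for current models aren’t published):

      ```python
      # Order-of-magnitude comparison of model parameters vs. brain structures.
      # All figures are rough public estimates; treat them as illustrative only.
      estimates = {
          "GPT-3 parameters":      175e9,   # published figure for GPT-3
          "frontier LLM (approx)": 1e12,    # assumed rough scale; exact counts unpublished
          "human brain neurons":   86e9,    # ~86 billion neurons
          "human brain synapses":  100e12,  # ~100 trillion connections
      }

      synapses = estimates["human brain synapses"]
      for name, value in estimates.items():
          print(f"{name:>22}: {value:.0e}  ({value / synapses:.3%} of synapse count)")
      ```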

    • lowleveldata@programming.dev · 2 days ago

      the previous big technical hurdles were the amount of compute needed and that computers couldn’t handle fuzzy pattern matching

      Was it? I thought it was always about how we haven’t quite figured out what thinking really is

      • masterspace@lemmy.ca · 2 days ago

        I mean, no, not really. We know what thinking is. It’s neurons firing in your brain in varying patterns.

        What we don’t know is the exact wiring of those neurons in our brain. So that’s the current challenge.

        But previously, we couldn’t even effectively simulate neurons firing in a brain. AI algorithms are called that because they can effectively simulate the way that neurons fire (just using silicon), and that makes them really good at all the fuzzy pattern matching problems that computers used to be really bad at.

        So now the challenge is figuring out the wiring of our brains, and/or figuring out a way of creating intelligence that doesn’t use the wiring of our brains. Both are entirely possible now that we can experiment with, build, and combine simulated neurons at roughly the same scale as the human brain.
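
        To be concrete about what “simulating the way that neurons fire” means on the software side, here’s a minimal sketch of a single artificial neuron in Python (made-up weights; a loose analogy, not a biophysical simulation):

        ```python
        import math

        def artificial_neuron(inputs, weights, bias):
            # Weighted sum of input signals pushed through a nonlinearity.
            # This is the loose ML "neuron firing" analogy, not a model of a real neuron.
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-activation))  # sigmoid: output near 1.0 ≈ "fires"

        # Hypothetical example: three input signals, arbitrary weights.
        print(artificial_neuron([0.2, 0.9, 0.1], weights=[1.5, -0.8, 2.0], bias=0.1))
        ```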

        • lowleveldata@programming.dev · 2 days ago

          Aren’t you just saying the same thing? We know it has something to do with the neurons but haven’t figured out exactly how

          • masterspace@lemmy.ca · 1 day ago

            The distinction is that it’s not ‘something to do with neurons’, it’s ‘neurons firing and signalling each other’.

            Like, we know the exact mechanism by which thinking happens; we just don’t know the precise wiring pattern necessary to recreate the way that we think in particular.

            And previously, we couldn’t effectively simulate that mechanism with computer chips; now we can.

    • lunarul@lemmy.world · 2 days ago

      There’s no technical reason to think we won’t in the next ~20-50 years

      Other than that nobody has any idea how to go about it? The things called “AI” today are not precursors to AGI. The search for strong AI is still nowhere close to any breakthroughs.

      • masterspace@lemmy.ca · 2 days ago

        Assuming that the path to AGI involves something akin to all the intelligence we see in nature (i.e. brains and neurons), modern AI algorithms’ ability to simulate neurons using silicon and math is inarguably and objectively a precursor.

        • lunarul@lemmy.world · 24 hours ago

          Machine learning, renamed “AI” with the LLM boom, does not simulate intelligence. It integrates feedback loops, which is kind of like learning, and it uses a network of nodes that kind of look like neurons if you squint from a distance. These networks have been around for many decades (I’ve built a bunch myself in college), and at their core they’re just polynomial functions with a lot of parameters. Current technology allows very large networks and networks of networks, but it’s still not in any way similar to brains.
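
          As a minimal sketch of that point, here’s a tiny feed-forward network in Python/NumPy (hypothetical layer sizes, random weights): each “layer of nodes” is just a matrix multiply plus a nonlinearity, and the whole thing is nested parameterized arithmetic, not a brain.

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # A tiny feed-forward network with hypothetical sizes: each "layer of nodes"
          # is just a matrix multiply plus a nonlinearity, i.e. a parameterized function.
          W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden nodes
          W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # 4 hidden nodes -> 2 outputs

          def network(x):
              hidden = np.tanh(W1 @ x + b1)   # nothing brain-like here, just arithmetic
              return W2 @ hidden + b2

          print(network(np.array([0.5, -1.0, 0.25])))
          print("parameters:", sum(p.size for p in (W1, b1, W2, b2)))  # 26 here; LLMs have billions
          ```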

          There is research into simulating neurons and brains, but that is separate from machine learning.

          Also, we don’t actually understand how our brains work at the level where we could copy them. We understand some things and have some educated guesses about others, but overall it’s still pretty much a mystery.