• ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 11 points · 6 days ago

    When can rebel groups create an AI that analyzes behavior to determine whether someone is a cop? Governments use AI to analyze protesters by the way they walk; we should use AI against the oppressors.

    • brucethemoose@lemmy.world · 15 points · 6 days ago

      Right now.

      The hardest part is the dataset (aka labeled pictures of undercover cops). Give me some of those (a thousand? the more the better) and I could train a small model for free in a few days, or a bigger, more reliable one for a few bucks. I can explain specifics if you want; there’s a rough sketch below.

      AI is not some mystery box like Altman would lead you to believe; it’s hackable and totally usable by regular people.
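
      To make “train a small model” concrete, here is a minimal transfer-learning sketch, assuming PyTorch/torchvision, a pretrained ResNet-18, and an ImageFolder-style directory of labeled photos. The paths, model choice, and hyperparameters are placeholder assumptions, not anything specified in the thread.

      ```python
      # Minimal transfer-learning sketch: fine-tune a pretrained ResNet-18
      # on a small labeled image dataset. Paths and hyperparameters are
      # placeholders, not a recipe anyone in the thread actually posted.
      import torch
      from torch import nn
      from torch.utils.data import DataLoader
      from torchvision import datasets, models, transforms

      device = "cuda" if torch.cuda.is_available() else "cpu"

      # Expects data/train/<class_name>/*.jpg -- one folder per label.
      tfm = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
      ])
      train_ds = datasets.ImageFolder("data/train", transform=tfm)
      train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

      # Start from ImageNet weights and only swap the final layer.
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
      model = model.to(device)

      opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(5):
          model.train()
          for x, y in train_dl:
              x, y = x.to(device), y.to(device)
              opt.zero_grad()
              loss = loss_fn(model(x), y)
              loss.backward()
              opt.step()
          print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

      torch.save(model.state_dict(), "classifier.pt")
      ```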

      • ameancow@lemmy.world · 10 points · 6 days ago

        The alignment of AI isn’t the issue; there will be models that serve any segment of any population.

        The skepticism should be placed firmly where it belongs: on the exploitative corporations that are creating crippled, hobbled, worthless models designed to sell products, make dim people feel good about themselves, and help high school students write papers. Current consumer AI products are fun for about 4 hours, until you get sick of being lied to or of making pictures of “Lola Bunny but real”.

        We won’t see an AI system that can reliably serve your actual needs for maybe decades. This is because these companies know (or thought) that people would buy anything with “AI” slapped on it, so they never had to actually build working systems that can do things like basic mathematics and contextual awareness.

        Most likely, people in China are going to be walking around with AI in their fucking sunglasses that will whisper in their ear whatever they want to know, long before we get any cop-sniffers in the US.

        • brucethemoose@lemmy.world · 4 points · 6 days ago

          It’s not a secret; tons of open-source models are locally runnable.

          Give me more pictures like this and a few days, and I can teach a vision model to sniff out cops on my 3090, or with like $40 in cloud costs.
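
          Continuing the training sketch above, this is roughly what “locally runnable” looks like at inference time: load the saved weights and score a single photo. The checkpoint name, class labels, and image path are placeholder assumptions.

          ```python
          # Follow-up to the training sketch: load the saved weights and
          # classify one photo locally. File names are placeholders.
          import torch
          from torch import nn
          from torchvision import models, transforms
          from PIL import Image

          device = "cuda" if torch.cuda.is_available() else "cpu"
          classes = ["class_a", "class_b"]  # whatever ImageFolder found at train time

          model = models.resnet18(weights=None)
          model.fc = nn.Linear(model.fc.in_features, len(classes))
          model.load_state_dict(torch.load("classifier.pt", map_location=device))
          model.to(device).eval()

          tfm = transforms.Compose([
              transforms.Resize((224, 224)),
              transforms.ToTensor(),
              transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
          ])
          img = tfm(Image.open("photo.jpg").convert("RGB")).unsqueeze(0).to(device)

          with torch.no_grad():
              probs = torch.softmax(model(img), dim=1)[0]
          print({c: round(p.item(), 3) for c, p in zip(classes, probs)})
          ```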

        • ameancow@lemmy.world · 2 points · 6 days ago

          China just demonstrated that they can create models comparable to the US’s “juggernauts” like ChatGPT-4o, but small enough to fit onto devices that you own. There is nothing secret about how an LLM works; it’s just challenging because the “AI magic” doesn’t emerge until you have a sufficiently large training dataset, and then it basically programs itself. (Criminally simplified.)

          But there are tricks you can use to make those models smaller and more efficient while still retaining the complex neural networks that make them work. When that day comes (soon), everyone will be able to literally have their own LLMs on-device, not attached to entities like Google or OpenAI (something like the sketch below).

          Disclaimer: I am a major AI skeptic and despise the current state of the tech, so take all of this with the appropriate grains of salt.
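
          As a rough illustration of the “own LLM on-device” idea, here is a minimal sketch using llama-cpp-python to run a small quantized open-weights model locally. The GGUF file name and settings are placeholder assumptions, not a specific model anyone here named.

          ```python
          # Sketch of running a small quantized open-weights model locally
          # with llama-cpp-python. The GGUF path is a placeholder for
          # whatever quantized model you download.
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/some-7b-instruct.Q4_K_M.gguf",  # quantized weights
              n_ctx=4096,        # context window
              n_gpu_layers=-1,   # offload everything to the GPU if one is available
          )

          out = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Summarize why quantization shrinks models."}],
              max_tokens=200,
          )
          print(out["choices"][0]["message"]["content"])
          ```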

          • brucethemoose@lemmy.world · 1 point · 6 days ago

            I would contest this and say that Europe (Mistral) and other US companies (like Meta, whose Llama series seeded everything happening in China now) were chasing ChatGPT very closely before Deepseek/Alibaba. Even South Korea (LG Exaone) and many smaller companies are putting up competition, often building on international work.

            Also, locally runnable Deepseek is nothing like GPT-4. The 32B is smart, but it just doesn’t have the world knowledge the 671B model has, and the 671B is not practical to run locally.

            …Sorry for being so nitpicky; I agree with the sentiment.