The AI Manhattan Project

Category: Tech Ethics

Tags: AI, Ethics, Military, Silicon Valley, Surveillance

Entities: Amazon, Anduril, CIA, Google, ICE, Llama, Lockheed Martin, Meta, Microsoft, NSA, OpenAI, Oracle, Palantir, Paris Marx, Pentagon, Silicon Valley, US Army, Y Combinator

Summary

    Silicon Valley and Military Tech
    • Paris Marx discusses the shift in Silicon Valley towards military technology and defense contracts.
    • Companies like Meta, OpenAI, and Palantir are increasingly involved in military applications.
    • Tech companies have removed restrictions on using AI for military purposes.
    • AI is being used for violence and surveillance, raising ethical concerns.
    AI and Ethics
    • AI technologies are prone to biases and inaccuracies, yet are used in critical areas like military operations.
    • AI's role in violence includes creating extensive target lists and making autonomous decisions.
    • There is a lack of human oversight in AI-driven military operations, leading to potential ethical violations.
    Corporate Influence and Government
    • Tech companies are increasingly intertwined with government and military operations.
    • The economic strategy is focused on enriching Silicon Valley elites through defense contracts.
    • Political figures have financial interests in tech companies like Palantir, influencing policy decisions.
    Takeaways
    • The intersection of AI and military tech poses significant ethical challenges.
    • Increased military focus in Silicon Valley reflects a broader shift in tech priorities.
    • AI's lack of accuracy and human oversight can lead to severe consequences in military contexts.
    • Corporate-government partnerships in tech are reshaping economic and political landscapes.
    • Public resistance to tech-military collaboration is often suppressed.

    Transcript

    00:00

    We are joined now once again by friend of the show Paris Marx.

    >> Paris Marx, author of Road to Nowhere and also host of the podcast Tech Won't Save Us. >> Look, the system identified that this is

    00:16

    a potential target. And so, of course, the system must be right.

    And this justifies us targeting this apartment building or going after this person and their family or whatever, right? And nobody's actually going to go check it because, you know, whether it's accurate or not in that case, I would argue doesn't matter.

    00:31

    >> If the human being is in the loop, you will lose. You can have human supervision.

    You can watch over what the AI is doing. If you try to intervene, you're going to lose.

    >> Silicon Valley is looking a little different these days. If the 2010s was

    00:46

    corporate art, pride logos, and nap pods, now it's more like: >> I sometimes hate the enemies of Palantir. I'm going to [ __ ] these people.

    I love the idea of getting a drone and having light fentanyl-laced urine spraying on analysts who tried to screw

    01:01

    us. >> You would prefer the human race to endure, right?

    >> Uh, you're hesitating. >> Well, I... Yes.

    >> I don't know. I would... >> People want to live in peace.

    They want to go home. They do not want to hear your woke pagan ideology.

    They want to know they're safe. And safe means that

    01:19

    the other person is scared. That's how you make someone safe.

    There's been a big shift in the valley. A shift in attitude, yes, as all these guys conveniently became Republicans, but more concerning, a shift in what they think the point of Silicon Valley

    01:36

    is. If you go back and think of like the earlier days of Google and you had this slogan of do no evil was the way that they kind of like represented themselves to the world, right?

    They wanted to look like they were a force for good by using this technology to like improve people's lives and stuff like that. Um, of

    01:52

    course, they threw off that slogan a number of years ago. Silicon Valley is going back to its roots and that's building the tech of the American empire.

    >> An entire generation of extraordinarily talented engineers in Silicon Valley will not hesitate to dedicate their working

    02:08

    lives to building, you know, online shopping algorithms, uh, the consumer apps on our phone. But when a US Marine asks for a better software system, they hesitate.

    02:25

    But first, it's ad read time cuz God knows we're not making any money from YouTube AdSense. Thank you, Google.

    Skip ahead to this timestamp if you hate me and don't want to support my work. This episode is sponsored by Aura, the only cool company on this cursed platform.

    Google your name real quick. Odds are

    02:42

    some of your personal information comes up, right? If I search my name, it's just my public social media profiles.

    That's because Aura is my digital watchdog. If they get even a whiff of my data being sold, traded, or exposed in a breach anywhere on the internet,

    02:57

    including the dark web, they track down the source and make sure they have a bad time. There are companies out there called data brokers whose whole purpose is to find and sell your data.

    They do it without your consent and they make billions selling it to marketers, scammers, and even stalkers. They are

    03:15

    the worst. Thanks to Aura, they're not my problem anymore.

    Aura forces them to remove my data, and they keep it removed. But that's just part of why I use and genuinely appreciate Aura.

    It's an all-in-one security tool. They

    03:30

    provide real-time fraud alerts for credit and banking, 24/7 identity theft monitoring, a secure VPN, antivirus, parental controls, password manager, credit checker, and a lot more. And if somehow anything does happen, Aura includes $5 million in identity theft

    03:46

    insurance. And they have US-based fraud experts on call 24/7.

    If you're anything like me, I'm sure you're sick and tired of these stupid, evil little companies stealing your personal information and using it to make a quick buck. That's why I use Aura.

    It keeps me and my

    04:02

    family safe online. If you're ready to protect your data, you can get two weeks absolutely free when you use my link.

    During those two weeks, you'll see exactly where your data is being leaked and who's doing it. Give it a try.

    I promise you'll appreciate the peace of mind. And now, back to the show.

    04:19

    This all starts with AI. Generative AI has been in our lives and in our faces for about 3 years now.

    >> Chat GBT. Chat GBT.

    >> Chat GPT. >> And we've always known it sucked.

    From stealing from artists and writers to getting people laid off to data centers

    04:35

    poisoning entire towns and, you know, the flood of Ghibli rips. AI really hasn't made anybody's life any better.

    Live memory can take your old photo and voila, turn it into a video. But it's worse now.

    In the last year, military

    04:50

    tech and military AI especially has become the hottest new thing in the valley. Everybody who's in AI and realizing they're in a bit of a bubble is looking for the best way to keep the air and cash coming in.

    And you don't need to look very far to find the biggest checkbook in the country. For

    05:07

    the first time in a while, tech is betting its survival on facilitating death and destruction openly. Last year, Y Combinator, the startup incubator behind things like Reddit and DoorDash, backed a weapons manufacturer for the first time ever, some company building

    05:22

    missiles for cheap. One of the biggest VCs in Silicon Valley is promising $500 million for whoever comes up with the best new defense tech.

    And most importantly, all the big players, OpenAI, Meta, Google, they're all getting into it, too. Recently, they've all

    05:38

    quietly deleted the bits in their usage policies that said their AI tools can't be used for military purposes and have turned a huge chunk of their resources to getting more defense contracts. Just to give you an idea, here's a bunch of AI CTOs and CROs officially being sworn

    05:54

    in as lieutenant colonels in the US Army, pledging allegiance to the American military mission and enlisting. Enlisting themselves, sure, but really the companies they represent.

    And on the one hand, we shouldn't be that surprised.

    06:10

    Since the election, we've seen the tech industry get really cozy with the Trump admin, broing it up on Rogan. Since this is a government that loves war, it makes sense that these guys would follow along.

    But there's more to this than a new haircut and a dinky little chain. They're not

    06:27

    just playing the culture war game to get conservatives off their back. They want to get deep in the actual war game.

    Like, take Meta. Meta's AI, Llama, which I've never heard of anybody using, but whatever, has cost the company billions of dollars in R&D.

    That's a big

    06:45

    investment into a thing that by and large has very little use to most people and that hardly anyone is paying for. Last year, though, Meta released this announcement:

    "We are pleased to confirm that we are also making Llama available to US government agencies, including those that are working on defense and

    07:02

    national security applications." The memo goes on to say that, as well as the government itself, Llama would be going to the government's top military contractors like Lockheed Martin and Anduril, for whom they're making military VR glasses, and companies that provide other services for the DOD, like Oracle

    07:19

    and Palantir. And Meta isn't the only one.

    OpenAI is kind of doing the same thing: making anti-drone AI with Anduril, writing a whole AI action plan for the US, and knitting a closer relationship with the big boys by recruiting as many people as possible

    07:37

    from the military wing of the government. People from the DOD, the CIA, the NSA, the Pentagon, and special ops all now work with Sam Altman.

    But the worst offender has to be Palantir.

    07:52

    Their name's come up a few times in this video already. And if you've been paying attention to the news, a lot more this past year.

    Their stock's gone up like 400% since last fall, and their guys are getting on TV like every other day.

    They're getting multi-million dollar

    08:08

    contracts with ICE and multi-billion dollar contracts with the Army. And unlike these other companies, who've done most of this pro-military stuff in the background to try to keep up their whole "making the world a better place and oh, we just make chatbots" facade,

    Palantir has been much more explicit that killing

    08:25

    people and making them suffer is the whole point. >> We're doing it.

    We're doing it. And I'm sure you're enjoying this as much as I am.

    Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when

    08:42

    it's necessary, to scare our enemies and, on occasion, kill them. So, like, what does Palantir actually do?

    Basically, software that makes it easier to parse through a lot of data. If you're the police, the government, or the army, and you have data from different systems,

    08:58

    not well organized, or just mountains and mountains of stuff from decades of surveillance, it's usually not obvious what to do with all that. Palantir's pitch is that their AI is there to help you see how everything connects and then automate decisions about what happens

    09:15

    next. In immigration, that means putting together profiles of every immigrant in the US with everything from border entry dates, visa status, home address, tax records, social media, known relationships, and past experiences with law enforcement, and then making a list

    09:30

    for ICE agents of where to go and when. In the case of war, it means doing pretty much the same thing by combining, say, spyware data with surveillance footage and facial recognition software, and then making a kill list.

    Then somebody or more and more often some

    09:46

    drone running on another AI model will go out and kill whoever has been identified. >> These weapons, you know, whether they're drones or guns or what have you, should kill.

    Um, and you know how much checking is going to be done. And you know, this isn't even like a

    10:02

    totally novel thing, right? The New York Times reported several years ago that Israel had already planted a gun that was set up with an AI system in order to target an Iranian nuclear scientist, so that no actual humans would be nearby, and it didn't need to have

    10:18

    like a connection to, you know, someone who could remotely trigger it. Um, it just knew; it was designed to kind of wait for a particular vehicle or whatever to pass by and then to attack it, to kill this nuclear scientist, right?

    And so these things are already out there.

    10:34

    >> The computer is not going to get tired. It's not scared.

    Um, it's going to follow its rules. >> And that's a rather old example, built around one person.

    If that gun was like a fishing rod, the current strategy is more like bottom trawling, just throwing a net down and

    10:50

    getting everything off the ocean floor. Take Israel's Where's Daddy software, an AI program that finds out when suspected fighters in Gaza go back home so they can be bombed in their sleep, which means killing far more bystanders than on the battlefield.

    We don't know for certain

    11:06

    if that's built on Palantir software, but the company is proudly involved with the Israeli genocide in Palestine. And just to give you an idea of how easy things like these make it to kill people, one person, asked about the Where's Daddy thing by Le Monde, said it was

    11:24

    as simple as, quote, "You enter hundreds of targets into the system and just wait to see who you can kill," saying they were fine with the fact that this software is, of course, not particularly accurate, and that civilian casualties are just collateral damage they're willing to accept because it's war. It's

    11:41

    worth reiterating that this is not a war. It's an extermination campaign.

    Now, they say the software is 90% accurate, but in a genocide where over 83% of deaths are civilians, we don't know how many of those came from some

    11:56

    AI. It's especially scary when you can see for yourself that AI is really prone to hallucinations, i.e.

    making stuff up, and is constantly reinforcing its own biases. It will never be an accurate technology.

    And that's kind of the

    12:13

    point. AI has given violence a new strategy both in war and domestic policing.

    The new meta is using tons of data to make a list of potential targets as long as possible so that the software looks powerful. Shoot or bomb or detain indiscriminately and then deal with all

    12:30

    the innocents caught in the crossfire after the fact. Sometimes.

    Regardless, only when it's too late. And what's really terrifying is that there's not much we can do to stop it.

    AI for the purpose of violence is just so

    12:46

    lucrative. And this government is all in.

    Stephen Miller personally owns between $100,000 and $250,000 worth of stock in Palantir. And vice versa.

    Palantir alums are everywhere in the Trump admin. And if the military

    13:01

    parades didn't give it away, it's clear that Trump's policy and economic strategy right now is making the American economy look good, even though nobody has a job, by pumping all the cash the government has into making a few psychos in Silicon Valley richer than

    13:18

    God. Because ultimately they're concerned with their wealth and their power, um, above all else.

    And they know that the American government is wedded to that too. If they are doing well, if they are growing, then that means that, sure, there might be some domestic problems with how their technologies and their tools are rolling out.

    13:34

    But if they are a globally dominant force, that is great for the US government. This partnership is so strong and these guys are so powerful that whether you're coming at it from the private or the public angle, it looks like there's no real way to resist

    13:51

    anymore. You look at how, you know, even companies like Google, Microsoft, Amazon have long been selling services to, um, the military, to the US government.

    In the past, they would face some degree of pushback for that. And what we see now is that when workers speak up,

    14:06

    they're pretty immediately fired and silenced. Protesters are arrested and fired.

    People are detained and shipped abroad to torture cells. Men, women, and children are followed home and bombed.

    And behind it all, an entire

    14:22

    architecture of surveillance and analysis is being built by the most technically capable engineers on million-dollar contracts working for billion-dollar companies backed by a trillion-dollar government budget. It's hard not to see this as a system built

    14:39

    on eradicating all semblance of humanity. Not only are there no people supervising the AI, we are certainly not human anymore.

    Not to them. These systems are built to dehumanize us until it's all just data on spreadsheets.

    There's no humanity left in the targets

    14:56

    or in those carrying this out or even just a shred of basic human empathy in the people behind it all. From top to bottom, violence is now just robots executing data.

    Those in power get to have a clean conscience because machines don't make human mistakes. The rest of

    15:12

    us are stuck waiting for the day some killer robot comes our way. And we won't even be able to beg.