The dominance of Wintel (see our previous article) standardized the computer industry and set the stage for powerful internet platforms such as Google, Amazon, and Facebook to emerge. By leveraging the ecosystem built by Microsoft, Apple, and the telecom industry, these businesses achieved growth unlike anything else in history. With a combined market value of $7.5 trillion, it is clear in hindsight that the dotcom bubble was merely a brief speculative fever ahead of the steady rise of corporations that now dominate our daily lives.
As the digital revolution reached its inflection point, data became the new oil. This data powers modern society, a quantification of the real world that exists as digital information inside computers and networks. It is used to create artificial intelligence. What we see as digital photos, posts, or documents are complex collections of numbers that can be mathematically analyzed. For companies whose business models are built on refining and selling consumer data, irresponsibly extracting this data has led to a dangerous erosion of privacy that is this new era’s equivalent of an ecological crisis.
The next phase of growth will come from innovations focused on industry and enterprise. This opportunity, often referred to as the “internet of everything”, is much larger than the pool of consumer data. How will the world's information be organized and who will have access to it? Will platforms for data be centralized at the hyperscale or decentralized in a hybrid cloud? Will "surveillance capitalism" breed a new privacy economy? Will such protocols be universal or discrete? Who will control these platforms and how?
I would say that one way to frame the challenge of AI is that it's obviously a new technology. It can be a powerful weapon, but there are also ways in which it has sort of a totalitarian vibe.
Longstanding tensions over data and privacy have escalated into a 'corporate war', dividing companies into ideological factions that reflect their business models. Apple is rapidly deploying new privacy features as Google consolidates its reach. Palantir is offering a platform that puts the power of data analysis into the hands of companies and government organizations, effectively society at large, while questioning the future of the few in Silicon Valley that hold it today. Palantir’s partnership with IBM and integration with AWS are strong indications that a powerful coalition is forming. This coalition will challenge the ambitions of Google and Facebook, which have built surveillance capabilities that not even the U.S. government is allowed to have.
The roots of this conflict stretch deep into the early days of the internet. As we will discuss, Google is arguably a DARPA project that ran out of control. Google is racing toward the development of a quantum computer, a device that, at scale, would make Google the most powerful entity in history. It faces competition from many, potentially including a coalition of IBM quantum hardware and Palantir’s data platforms.
“This is going to be the decade in which quantum really comes of age,” an IBM executive recently told the Wall Street Journal. Quantum computing is to AI what nuclear weapons are to bombs. This quantum race is the context to the current debate about privacy.
This is not the story of ambitious young entrepreneurs. This is the story of how private versions of some of the most advanced defense projects ever conceived were built by a company that has sought to influence US politics, unilaterally engaged in clandestine activities, and developed its own foreign policy… all in the pursuit of creating artificial intelligence. It is a complex matrix of competing objectives, technologies, and business models that investors will need to carefully consider in order to navigate the 2020s.
This is a power struggle that will have significant consequences. The most powerful companies in the world are fighting for control over the world’s most important asset: Data. If the computer is the greatest tool mankind has ever created, then the outcome of this corporate conflict will inevitably define the future of humanity.
To understand where things are headed, we must first have a deep understanding of how far things have gone. Why have Palantir, Apple, Microsoft, and others become so openly critical of Google and Facebook? Journalists, George Soros, and activist groups have all painted Palantir and government agencies as the bad guys, but is this true? Here we will challenge such assumptions.
Google was founded to collect as much data and information as possible, and to use that data to build artificial intelligence. Today, nearly everything we do leaves a traceable digital footprint. At the turn of the millennium, mass amounts of information had not yet been digitized, let alone centralized in hyperscale cloud computing centers.
The search engine used to browse this information was merely a by-product of this effort. Google co-founder Larry Page grinned as he admitted this in 2002: “Artificial intelligence would be the ultimate version of Google so if we had the ultimate search engine it would understand everything on the web… we have enough space to store like 100 copies of the whole web.” To create such an intelligence, the co-founders of Google planned to use immense computing power to train algorithms on the world’s data, what we might think of as HPC cloud applications today. This earned Google’s search engine the nickname “God’s Brain”.
Google was just one of a series of projects, often classified, that were backed by the NSA and DARPA in the 1990s. As PhD students at Stanford, the work of Google’s founders was overseen by two DoD officials and received funding from DARPA indirectly through a federal initiative.
Projects internally pursued by DARPA were consolidated into the Information Awareness Office in 2002. One such project, the Total Information Awareness Program, planned to use predictive modeling and data mining to spot terrorists. Google uses similar techniques to identify consumers interested in a particular product. These were very new ideas at the time. The IAO was later defunded over fears that it was too Orwellian and could lead to mass surveillance of US citizens, transferring parts of TIA to the NSA.
(Total Information Awareness, Image source: DARPA)
The DoD’s early involvement with Google isn’t necessarily surprising or nefarious. The internet itself has its roots in a DARPA project known as ARPANET. The projects pursued by the Total Information Awareness Program were just the latest and most sophisticated in a long history of surveillance programs such as FAIRVIEW. In the mid 1990s, the infrastructure behind a mass surveillance and data sharing program known as UKUSA was bigger than the internet itself. Companies such as AT&T had a decades-long history of deep cooperation with the NSA. Even the 1992 Clinton Campaign featured its own amateur version of ECHELON, tapping into satellite links to capture off-air feeds. This enabled the campaign to prepare responses before news coverage aired, in one instance intercepting an attack ad 36 hours before it was broadcast to the general public.
By the time DARPA was consolidating programs into the Information Awareness Office, Google had launched a massively ambitious project to scan every book in the world, giving the company more data to train its artificial intelligence algorithms. Google struck secret agreements with libraries and stated publicly that the project was centered around rare and out-of-circulation books. In reality the program scanned all books, hoping that the sheer scale and audacity would enable the company to stand on a weak “fair use” loophole in the copyright law, taking advantage of the fact that few would understand that the true intention of the project was to create AI products that could later be commercialized.
As we will see, this pattern of ‘scale and see if you can get away with it’ behavior is a recurring theme in the company’s history. When later confronted about mounting lawsuits over Google’s library scanning project, Sheryl Sandberg, then a Google VP, told reporters that “maybe we didn’t realize some people were scared.” In 2004, 31 organizations also urged the suspension of Gmail over privacy concerns.
Imagine your brain being augmented by Google. For example, you think about something and your mobile phone could whisper the answer into your ear.
-Google co-founder Larry Page, 2004
While Google was scanning books for the benign-sounding “Project Ocean”, two other technology companies emerged that also resembled elements of DARPA and NSA projects. Facebook was famously founded by a Harvard undergrad, but closely resembles another DARPA project known as LifeLog, emerging just months after that program was cancelled. Palantir was spun out of PayPal in 2003 and became deeply embedded in DoD activities in Iraq and Afghanistan. Today, it incorporates elements from a number of advanced projects that will be discussed in detail later.
At the same time that the NSA began scaling DARPA's vision for “total information awareness” with programs such as Stellarwind (2004), Topsail (2005), and PRISM (2007), Google used surveillance equipment onboard vehicles used for Street View and Google Maps to tap and collect data from unsecured WiFi networks in 30 countries. Google publicly stated that this data was collected by mistake, but an investigation revealed it was intentional. Google was later fined for stonewalling the FCC’s investigation. The privacy concerns surrounding Google Maps are another example where scale trumped controversy.
I actually think most people don't want Google to answer their questions. They want Google to tell them what they should be doing next.
-Eric Schmidt, CEO/Executive Chairman 2001-2018 (8/4/2010)
But Google’s expanding ambitions stood in contrast to its stock price which peaked in 2007 and did not fully recover until 2012. It took time for the market to fully realize the significance of Google’s purchase of the Android smartphone operating system in 2005. Schmidt even served on the board of Apple between 2006 and 2009. His departure foreshadowed divergent views amongst Silicon Valley leaders.
As the internet grew, Google’s ability to harvest mass amounts of data on consumers, analyze their habits, then micro-target them with ads, made the company both incredibly profitable and politically powerful. Google’s then CEO Eric Schmidt became deeply embedded in US politics, spending the day of the 2008 election inside of the Obama campaign’s “war room”. In 2010, Schmidt co-authored a paper for the Council on Foreign Relations on “connectivity and the diffusion of power”. The concepts in this paper were expanded on in a book titled The New Digital Age, which received endorsements from Henry Kissinger, General Michael Hayden, Tony Blair, Mohamed El-Erian, and Elon Musk. Musk later implied that Google is the only company he is afraid of.
This book could be thought of as Google’s foreign policy declaration, given that it discusses the creation of “virtual statehood”. It advocates for the centralization of one’s life into a “system of information management and decision making”, presumably Google, and suggests that some governments will enforce internet identity policies similar to the real-name policy of Google+.
At times it reads like an instruction manual for creating the world Edward Snowden wanted to avoid. In a section titled “Privacy Revisited”, Schmidt declares that:
There will be a record of all activity and associations online, and everything added to the Internet will become part of a repository of permanent information… …People will be held responsible for their virtual associations, past and present.
People have a responsibility as consumers and individuals to read a company’s policies and positions on privacy and security before they willingly share information. As the proliferation of companies continues, citizens will have more options and thus due diligence will be more important than ever.
The next section of the book suggests “coping strategies” including “sealing virtual juvenile records”. Why not just delete them?
What’s more shocking is what followed the book’s publication. Co-authoring both the CFR paper and the book was Jared Cohen, head of a Google/Alphabet division now known as Jigsaw. Cohen is a highly political figure, having worked closely with Hillary Clinton. Jigsaw could be thought of as Google’s paramilitary operation, as Wikileaks revealed that it unilaterally intervened in the Syrian civil war, partnering with Al-Jazeera for a psy-op campaign.
Google’s direct involvement in military conflict raises the suspicion that it intended to use warzones in the Middle East as a testing ground for the kinds of ideas described in The New Digital Age, as the book also contains entire chapters on both “the future of revolution” and “the future of reconstruction”. Indeed, Schmidt and Cohen made "secret visits" to Iraq, according to CNBC. Today, Jigsaw is focused on countering white supremacy and domestic terrorism. This is particularly alarming with the current rhetoric out of Washington. Jigsaw’s interests are remarkably aligned with calls for a new war on domestic terrorism, just months after the Patriot Act expired.
At the turn of the millennium, the internet was highly decentralized, relying on networks of servers connected by ISPs. By the end of the decade, much of the internet infrastructure had consolidated amongst the “big tech” companies. Their "hyperscale" cloud datacenters dwarfed the ISPs. Google and Facebook were able to leverage this grip on the infrastructure of the internet to create mass surveillance enterprises that put the Orwellian powers of programs like Total Information Awareness and LifeLog into the hands of private corporations.
Today, these massive constellations of computers directly control much of society, but there are important nuances. For companies such as Apple, Amazon, and Microsoft, “the user” is the customer. Customers buy hardware from Apple. They buy software from Microsoft. Amazon provides services; it’s a platform for digital commerce. For Facebook and Google, the relationship is inverted; the user is the product. They receive access to free software and services, in exchange for allowing these companies to create highly detailed psychographic profiles for advertising purposes, and bulk data collection for purposes of developing AI. Google began crawling through Facebook as far back as 2007.
Wyden: Does the NSA collect any type of data at all on millions, or hundreds of millions of Americans?
Clapper: No, sir.
Wyden: It does not?
Clapper: Not wittingly. There are cases where they could inadvertently, perhaps, collect, but not wittingly.
The following year, the Director of National Intelligence lied about NSA activities, while under oath, to a congressional committee. This prompted an NSA contractor, Edward Snowden, to leak an archive of top-secret NSA documents to a group of reporters at The Guardian. These documents detailed how projects and programs originally part of DARPA’s Total Information Awareness and the Information Awareness Office were secretly continued and expanded by moving them to the NSA. The NSA had been conducting mass surveillance on American citizens, and “actively misled” Congress about such activities.
I argued it was unethical, illegal, and unconstitutional.
—Diane Roark (House Intelligence Committee Staff, 1985-2002)
The response to the backlash from the tech community was a summit at the White House the following Spring. The meeting included Eric Schmidt of Google, Mark Zuckerberg of Facebook, and Alex Karp of Palantir. “People around the globe deserve to know that their information is secure” said Mark Zuckerberg, who opened up Facebook so that Google could crawl through posts and index user profiles.
These events only emboldened Google further. Shortly after the meeting, Google unified its services. This meant that data collected on users of one particular Google product or service would no longer be siloed; it would be shared across the entire company. Your YouTube history was now being matched up with your Google searches. Signing into one service was the same as signing into them all.
At the same time that Apple was fighting with the FBI over the lack of a backdoor into the iPhone, Google dropped one of the last privacy protections, officially removing a policy that kept personally identifiable web history separate from other data. A year later, it was discovered that Google could track cell phones even when location services are turned off. A study by a professor at Vanderbilt found Google’s phones transmit significantly more data back to Google servers, and much more frequently, when compared to Apple's iPhone. Google's Android phone made a request to communicate with Google servers every 1.5 minutes.
(Image source: Professor Douglas C. Schmidt, Vanderbilt University)
The escalation continued. Google began using Chrome to scan files on users’ computers. Google cut a “secret deal” with Mastercard to purchase credit card data (that could be combined with location data). Google tracked receipts received through Gmail. Google bought Fitbit and gained access to health records under a secret program codenamed “Nightingale”. A secret microphone was found inside thermostats made by Google-owned Nest.
Google even began using Bluetooth homing beacons placed inside stores. If all of this wasn’t enough data, Google offered users $20/month to track them with even more granular precision. Predictive models built from such data could be easily applied later to individuals who did not participate, by identifying statistical similarities in the data that is collected. Even Pokémon GO was conceived inside of Google as a “shadow game”, designed to herd users to businesses using “lure modules”.
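The inference described above is sometimes called “lookalike” modeling. A minimal sketch, using entirely hypothetical data and a simple nearest-neighbor rule rather than any actual Google system: a model built from users who opted in to fine-grained tracking can label a non-participant by matching their coarser, still-observable behavior to the most statistically similar opted-in profile.

```python
# Hypothetical illustration of lookalike modeling: predictions about users who
# never opted in, derived from users who did. Feature vectors and labels are
# invented for the example.

# (visits_per_week_to_coffee_shops, avg_session_minutes) -> purchase label
opted_in = [
    ((5.0, 30.0), "buys_premium"),
    ((4.5, 25.0), "buys_premium"),
    ((0.5, 5.0),  "no_purchase"),
    ((1.0, 8.0),  "no_purchase"),
]

def euclidean(a, b):
    # Straight-line distance between two behavioral profiles.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(profile):
    # 1-nearest-neighbor: assign the label of the most similar opted-in user.
    nearest = min(opted_in, key=lambda rec: euclidean(rec[0], profile))
    return nearest[1]

# A user who never opted in, but whose coarse behavior is still observable,
# is classified by similarity alone:
print(predict((4.8, 28.0)))  # -> buys_premium
```

Real systems use far richer features and models, but the principle is the same: opting out of collection does not opt you out of inference.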
It’s not hard to see how Google became so powerful. All of this intimate data was traded in exchange for a web browser, a map, an email client, a few office apps, a video-sharing site, and a search engine. At least the NSA offers national defense, has government oversight, and is subject to the Foreign Intelligence Surveillance Act of 1978.
Google effectively created a privatized version of TIA and refused to give the government access to it. This is particularly concerning considering that Google recently hosted an Obama Foundation Fellow who argued that the US government is controlled by a group of elite old men colluding to design a system that enables them to rule society, and that we are going to put in place a new democratic system with a “collaborative” and “liquid” design, one that people “actually choose”.
(The NSA hacks Google, Image source: Edward Snowden)
For the first time in history, a company that controlled so much of US telecommunications infrastructure was refusing to cooperate with government monitoring of such channels. This was a precedent that stretched back to Western Union’s cooperation with the “Black Chamber”, giving intelligence officials access to telegraph cables. While Microsoft was supportive of the NSA, giving pre-encryption access, Google engineers were furious to learn that the NSA had hacked into Google’s internet infrastructure.
Google not only refused to give the NSA access, but a group of senior engineers known as the “Group of Nine” even refused to adapt Google’s cloud infrastructure for basic use by the DoD. These facts are in stark contrast to Google's involvement in US politics and its influence over the US government.
After helping the Obama campaign create a “complex model of the electorate” during the 2012 election, Google’s Eric Schmidt went on to fund no less than three startups aimed at using consumer and voter data to influence elections, all of which were run by former Obama staffers. This included Groundwork, which was described as “Salesforce.com for politics”. Groundwork’s only “political” client was the Clinton campaign, and the company included former employees from Google. Clinton attended an event hosted by Google just days after Groundwork was incorporated.
…Groundwork has been tasked with building the technological infrastructure to ingest massive amounts of information about voters, and develop tools that will help the campaign target them for fundraising, advertising, outreach, and get-out-the-vote efforts—essentially to create a political version of a customer relationship management (CRM) system, like the one that Salesforce.com runs for commerce, but for prospective voters.
When Donald Trump won the 2016 election, Facebook and Google were quick to condemn Cambridge Analytica—the Trump campaign’s equivalent of Groundwork.
“That’s against the policies. You can’t share data in a way that people don’t know or don’t consent to.”
-Mark Zuckerberg, CNN interview (3/21/2018)
In the wake of the Cambridge Analytica scandal, tensions began to simmer. When asked what he would do, Apple CEO Tim Cook told MSNBC that he “wouldn’t be in this situation” if he was running Facebook. Facebook CEO Mark Zuckerberg responded by implying that Apple’s products are just for "rich people”. Apple escalated things further by rolling out a new feature to limit time spent on certain apps, as Instagram’s app appeared on a screen behind the presenter.
Google’s current CEO Sundar Pichai refused a request to attend a senate inquiry into the role that technology platforms played in the 2016 election. Rarely discussed is the fact that conservative media has been thriving on platforms such as Twitter, YouTube, and Facebook.
Media Matters reported that in 2020, posts from right-leaning pages were significantly more popular than those from left-leaning pages. Right-leaning pages had the “highest interaction rate”, and 4 out of the top 5 political pages were right-leaning. Left-leaning pages accounted for just 16% of total posts from political pages. The report concluded that the gap widened as engagement increased during the Summer 2020 protests.
This is what made the dramatic shift towards censorship so alarming. Google’s Schmidt once said that government censorship would end in a decade. Seven years later, Twitter, YouTube, and Facebook banned a sitting US president, among many others. Facebook gave selfies and private messages from the Capitol Hill riot to the FBI. Experts called the riot at the Capitol the beginning of “an American insurgency”. Perhaps a war on domestic terrorism would call for new emergency surveillance powers. Note that this power was wielded selectively; Facebook did not extend assistance to law enforcement when rioters burned courthouses and destroyed federal monuments in the Summer of 2020.
Facebook and Google have amassed enormous wealth and power by monetizing intimate personal information about consumers. Tim Cook called it the “data industrial complex”. Flexing this power during and after the 2020 election season ignited the simmering tensions. It started with Apple testing new privacy features that would require apps to ask for permission to track users across apps and websites owned by other companies. Users would have to opt in rather than opt out. This was widely seen as Apple setting and enforcing a privacy standard.
This pending iOS update is an existential threat to businesses built on collecting data for marketing purposes. Facebook responded by taking out full-page newspaper ads accusing Apple of trying to hurt small businesses. Facebook warned analysts that this could curb revenue, while Bloomberg reported that it may file a lawsuit against Apple. “We increasingly see Apple as one of our biggest competitors,” said Zuckerberg on Facebook’s most recent earnings call.
Tim Cook shot back by accusing companies of “data exploitation” and criticized companies’ algorithms for “perpetuating the spread of disinformation and conspiracy theories for the sake of user engagement”, as a reporter at Bloomberg put it. We wrote about this extensively in a blog post on Palantir. He also made an appearance on Fox News Sunday to explain why Apple removed Parler from the App Store, calming the nerves of anxious conservatives.
Technology does not need vast troves of personal data, stitched together across dozens of websites and apps, in order to succeed. Advertising existed and thrived for decades without it. And we’re here today because the path of least resistance is rarely the path of wisdom.
-Tim Cook (1/28/21)
Google announced that it would ban third-party cookies. In the wake of Google’s degradation of privacy, many “data aggregators” and “data brokers” followed in Google’s footsteps, buying consumers’ data from apps, websites, and banks. Among others, this data is sold to hedge funds, including Two Sigma, a haven for former Google executives. Banning third-party cookies will enable Google to consolidate its power over the ad industry, leveraging Google Chrome to make it harder for anyone else to track users. Many smaller competitors will be wiped out. Third-party cookies will not be allowed, but Google will still own the browser and thus have access to your browsing history. Google also owns DNS servers that direct internet traffic.
Then Google and Facebook both attempted to “bully” Australia over a proposal that would require them to compensate Australian newspapers. When Google threatened to suspend its services in Australia, Microsoft’s CEO Satya Nadella reached out to Australia’s prime minister, offering alternatives to Google’s services.
In an interview, Nadella also attacked platforms such as Twitter, YouTube, and Facebook:
Unilateral action by individual companies in democracies like ours is just not long-term stable—we do need to be able to have a framework of laws and norms.
Big by itself is not bad, but competition is good. And more importantly, you need to have a business model that is really aligned with the world doing well. There are certain categories of products where the unintended consequences of the growth on that category or lack of competition creates issues.
-Satya Nadella (2/20/21)
Even Elon Musk weighed in, encouraging his nearly 50M followers on Twitter to “use Signal”. This was essentially telling them to ditch Facebook’s WhatsApp. Signal was created by Brian Acton, the co-founder of WhatsApp (bought by Facebook in 2014), after he felt that Facebook was eroding too many of WhatsApp’s original privacy protections.
When popular messaging app Discord removed WallStreetBets, Musk took to Twitter to say that “even Discord has gone corpo”, a reference to a futuristic video game where large tech corporations engage in covert conflicts with one another in an anarcho-capitalist state.
If we acknowledge that companies like Apple and Amazon are the blue chips of this new era, then it must also hold true that the consumer internet is maturing. In almost every major category, these firms are going toe-to-toe. YouTube/Prime/Apple TV… WhatsApp/iMessage… Apple/Android… ChromeBook/iMac… AWS/Azure/Google Cloud… Gmail/Outlook… iWork/Google Docs/Office365… Tesla/Waymo… and most importantly, OpenAI/DeepMind…
The biggest growth opportunities are in the industrial internet. 5G, a new generation of wireless telecommunications technology, will generate significant amounts of IoT data. This will be augmented by a string of breakthroughs in semiconductor architecture, enabling chips that are much more efficient at AI tasks. Together these will open up many new industrial, enterprise, and business applications.
In order to make full use of these technologies, a platform that coordinates data collaboration inside of a firm and between independent firms will be required. Think of Palantir’s platform as a sort of XKeyscore for the enterprise. Data can be shared and leveraged in a way that creates new levels of efficiency. Think of retailers sharing data with logistics partners in a way that decreases unused space on freight loads, or a supplier sharing data with a downstream manufacturer to optimize production schedules. Other data should not be shared, and perhaps should be tightly controlled.
Data gradually will move to the center of how companies are run. This is what Palantir CEO Alex Karp means when he says Palantir’s corporate customers are "the 125 most interesting institutions in the world". Each new customer is planting a seed that may grow into the structure of an entire industry.
This is not a new concept. Palantir shares aspects of DARPA projects such as Deep Green, a battlefield decision-making support system. This included components such as “Crystal Ball”, designed to analyze possible future outcomes on the battlefield. HART was a program designed to automate drone patrols and integrate the data collected with troops on the ground. The DoD also worked on projects to bring more data connectivity to the battlefield, such as the TCOM “Blue Devil” airship, which could be thought of as the military equivalent of 5G access.
In the future, businesses will operate in a similar way. This raises concerns and creates new challenges. Years of work for the DoD acted as stealthy research and development for a software platform that Palantir is now scaling with enterprise applications. Though they may both have their roots in DoD projects, Palantir is the anti-Google. Unlike companies that hoover up and hoard as much data as possible (then use it for their own purposes, sell it to third parties, or both), Palantir does not transfer client data for its own purposes.
Unlike many tech companies, our business model is not based on the monetisation of personal data. We do not collect, store, or sell personal data. We don’t use personal data to train proprietary AI or machine learning models to share or resell to other customers. We never facilitate the movement of data between clients, except where those specific clients have entered into an agreement with each other.
This is not to say that Palantir does not and will not incorporate AI. On the horizon are powerful “general purpose” AI tools, such as OpenAI’s GPT-3, that can (and likely will) be incorporated into platforms like Palantir in powerful ways. OpenAI was launched with assistance from Elon Musk and Palantir’s Chairman, Peter Thiel. Microsoft is one of OpenAI’s key partners. A recently announced partnership with IBM will integrate IBM Watson into Palantir.
Palantir enables customers to harness the power of data and analytics while giving them control over how their data is handled, providing protocols to ensure that the data is handled ethically. This means returning sovereignty and control to the parties involved: “these organizations define what can and cannot be done with their data”.
This is particularly important when data needs to be shared to harness its full potential. One of Palantir’s biggest advantages is that it didn’t try to take over the world. While Google has faced repeated fines and criticism in Europe, Palantir joined the EU’s GAIA-X project as a “Day-1 member”. GAIA-X is described as “an open, transparent digital ecosystem, where data and services can be made available, collated and shared in an environment of trust.”
In other words, Europe is building an open and public version of Google, and Palantir is providing software infrastructure that will make that possible: Fulfilling Google’s mission of organizing the world’s information without putting it in the hands of Google executives. A decentralized version of Total Information Awareness. Microsoft has also said it is interested in participating.
It is rumored that Palantir will select a location outside of Zurich, Switzerland, as its European headquarters.
Our society has effectively outsourced the building of software that makes our world possible to a small group of engineers in an isolated corner of the country. The question is whether we also want to outsource the adjudication of some of the most consequential moral and philosophical questions of our time.
The engineering elite of Silicon Valley may know more than most about building software. But they do not know more about how society should be organized or what justice requires.
Our company was founded in Silicon Valley. But we seem to share fewer and fewer of the technology sector’s values and commitments.
From the start, we have repeatedly turned down opportunities to sell, collect, or mine data. Other technology companies, including some of the largest in the world, have built their entire businesses on doing just that.
Software projects with our nation’s defense and intelligence agencies, whose missions are to keep us safe, have become controversial, while companies built on advertising dollars are commonplace. For many consumer internet companies, our thoughts and inclinations, behaviors and browsing habits, are the product for sale. The slogans and marketing of many of the Valley’s largest technology firms attempt to obscure this simple fact.
The world’s largest consumer internet companies have never had greater access to the most intimate aspects of our lives. And the advance of their technologies has outpaced the development of the forms of political control that are capable of governing their use.
-Alex Karp, Letter to Shareholders (Palantir S-1)
Palantir has been one of the biggest critics of the data industrial complex. When Google refused to work with the US government while simultaneously announcing it would open an artificial intelligence lab in Beijing, Palantir’s chairman called the move “seemingly treasonous” and urged the US government to probe Google.
Google also partnered with the Chinese Communist Party to host an event showcasing the power of Google’s artificial intelligence. This event was held in Wuzhen, the site of the World Internet Conference, an event that once denied entry to US media outlets (such as the New York Times) and drew scorn from Amnesty International. Google is ideologically aligned with the CCP insofar as both share a vision of a single platform that controls all data, which is, after all, Google’s stated mission. China is using this data to create “social credit scores” for its citizens.
Palantir’s CEO used this as a basis for an attack against Google in an interview with CNBC at Davos 2020:
Military AI will determine our lives, the lives of your kids. This is a zero-sum thing. The country with the most important AI, most powerful AI, will determine the rules. That country should be either us or a Western country.
That doesn't mean you're anti-our-adversaries. It just means would you rather have them with the equivalent of tech nuclear arms or us?
If a company decides not to work with the US government on this, I think we all need to understand why and they need to guarantee they're not implicitly or tacitly transmitting to others.
This program will quite literally determine who is standing here [at Davos] and what they're saying in five years.
He continued, criticizing virtual statehood:
…And the consumer internet companies, this is not Apple but the other ones, have basically decided we're living on an island. And the island is so far removed from what's called the United States in every way, culturally, linguistically, in the normative ways, that we'd rather be regulated as a foreign island than be part of the United States proper…
…You cannot create an island called Palo Alto Island that is only subject to regulation much like a canton system. What Silicon Valley really wants is the canton of Palo Alto. We don't have a cantonal system in America. We have United States of America, not the United States of a Canton, one of which is Palo Alto. That must change.
This comment was particularly prophetic. One year later, Nevada proposed legislation that would allow tech companies to form their own governments, with the authority to impose taxes, form courts, and provide government services such as police. This came just months after Google announced it would invest $1.8B in Nevada. Google has a subsidiary, Sidewalk Labs, dedicated to building a city “from the internet up”; such a project might provide a concrete legal framework for “virtual statehood”.
The stakes are very high. To create the AI and ML tools of tomorrow, a powerful platform for data is needed. For as long as Google manages the world's information, it will have the upper hand in developing AI. The current CEO of Google, Sundar Pichai, has said that the impact of AI will be more profound than man’s discovery of fire.
For Palantir, this moment is the culmination of a nearly two-decade gamble that required extraordinary patience and commitment from Palantir’s investors. While engineers at the Googleplex were building the Google advertising machine, Palantir’s engineers were focused on a class of software well ahead of its time: tools to analyze and interface with highly unstructured data in highly dynamic environments. They now seem to be unleashing it on the world.
Palantir’s software engineers often developed software for use on the ground in Iraq and Afghanistan. If Palantir’s software is robust enough to meet the demands of operating on a battlefield, imagine the possibilities in a stable peacetime setting. That expertise and experience would be extremely difficult for anyone else to replicate.
Palantir’s recent partnership with IBM is a sign that Palantir’s ideas and technologies are beginning to scale in a big way. This partnership gives IBM the software that it needs to be competitive in tomorrow’s industrial internet. For Palantir, it gives the company access to IBM’s cloud infrastructure, as well as IBM’s sales force. The collaboration will focus on AI applications, leveraging IBM’s Watson. We are of the opinion that Palantir’s revenue guidance may turn out to be conservative in hindsight.
One of the most powerful data use-cases is in the healthcare sector. There is a vast array of possibilities in areas such as medicine, virology, genomics, and fitness. Data can create new insights that would have been impossible to detect before, but this data is highly sensitive and confidential. How this data is handled raises complex ethical questions. Here we will examine two approaches for deciding who gets access to what data.
Google has gained access to such data in a number of ways. For example, the CEO of 23andMe is the ex-wife of Google co-founder Sergey Brin. Though there is no official partnership, Google was a seed investor in the company.
Internally, Google has “Project Nightingale”, presumably a reference to Florence Nightingale. Under this project, Google has worked with Ascension to gain access to tens of millions of patient records, from Ascension’s 2,600 hospitals in 21 states. Doctors and patients were not notified that 150 Google employees were given access to their data.
In the UK, a hospital trust gave Google’s DeepMind access to the personal data of 1.6M patients. Facebook’s "Building 8" division also secretly approached several major US hospitals seeking access to sensitive patient data, a project that was abandoned once it became known publicly.
Google also purchased Fitbit, giving it unfettered access to the real-time biometric data of Fitbit users, data that can easily be matched with what Google already has. Does your heart beat a little faster when you watch that particular YouTube video? Imagine the biometric response data that could be collected from Facebook's Oculus.
Palantir has taken a very different approach, partnering with government organizations, empowering them with Palantir’s software platform so that they can do their own research and analysis. This comes with a robust suite of tools, including tools that track who gains access to what data, when, and why.
Though these contracts have been awarded in the context of Covid, they are likely to scale over time. Palantir has received contracts from the NHS and HHS. We hope we’ve adequately explained why the controversy surrounding such contracts is suspiciously lacking in perspective. Such projects provide a software platform for these agencies to analyze their own data; they do not turn it over to Palantir.
What were once considered unthinkable tools of government mass surveillance are now in the hands of private corporations, which have used calibrated psychographic profiles to addict users with calculated dopamine hits, driving engagement for the purpose of selling more ads.
These platforms are being used for artificial intelligence projects that have been described by experts (including Elon Musk) as more dangerous than nuclear weapons. Google and IBM are both racing towards a new type of computer, the quantum computer, which will bring the computational hardware required to match and exceed the human brain. This will require computing models 1,000x larger than OpenAI’s massive GPT-3, or 100x larger than Google’s recent 1T-parameter leviathan.
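As a rough sanity check on those multipliers (GPT-3's ~175B parameter count is public; the 1,000x and 100x figures are the article's own), both estimates land in the same order of magnitude:

```python
# Order-of-magnitude check of the model sizes discussed above.
GPT3_PARAMS = 175e9    # OpenAI GPT-3: ~175 billion parameters (public figure)
GOOGLE_1T = 1e12       # Google's recent ~1T-parameter model

future_model_a = 1_000 * GPT3_PARAMS   # 1.75e14, i.e. ~175 trillion parameters
future_model_b = 100 * GOOGLE_1T       # 1e14, i.e. ~100 trillion parameters

# Both paths point to models on the order of 10^14 parameters.
```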
While quantum technology will take years to scale, the race is on to build the software infrastructure today. Early successes with D-Wave's quantum annealer (Google is a customer) are a strong indication that we are nearing practical applications. This makes the mounting antitrust case against Google a critical juncture in history. If (in the words of Ray Dalio) the Federal Reserve’s printing press is the world’s most valuable asset, the world’s first scaled-up quantum computer and/or generally intelligent AI would be a close second.
Quantum computing is to conventional computing what a warp drive is to a bicycle.
—Jeremy O’Brien, CEO of PsiQuantum (2/3/2021)
The race to such technology will be a defining theme of the decade to come. This is no secret. In 2015, Google restructured its management and changed the company name to Alphabet to reflect this “Alpha-bet” on AI. IBM has the goal of creating a 1,000-qubit quantum computer by 2023. If successful, such a machine might give clients on the Palantir platform the capability to run optimizations and other functions that are simply not possible on today's hardware.
But to think that American companies are alone in this race would be a US-centric view. The Chinese government is funding projects in areas like quantum computing, where it has had major successes. Some estimates peg China as having more 5G subscribers than the entire US population. There are increasingly vocal calls for the US government to step in and provide funding to the technology sector. Eric Schmidt recently warned that allowing China to overtake the U.S. in AI would be a “national emergency”.
Schmidt’s comments reflect a recent publication from the Council on Foreign Relations. In The Innovation Wars, two employees from the CIA’s In-Q-Tel argue for more government funded research, noting that China’s “military-civil fusion” has enabled it to evolve from an imitator to a pioneer. China is already home to the world’s most valuable AI company, SenseTime. In their words:
China’s push for technological supremacy is not simply aimed at gaining a battlefield advantage; Beijing is changing the battlefield itself. Although commercial technologies such as 5G, artificial intelligence, quantum computing, and biotechnology will undoubtedly have military applications, China envisions a world of great-power competition in which no shots need to be fired.
The authors then suggest that President Biden should “create a process for aligning government investments with national priorities” and explicitly call for:
...building national data sets for research purposes, along with improved privacy protections to reassure people whose information ends up in them. Such data sets would be particularly useful in accelerating progress in the field of artificial intelligence, which feeds off massive quantities of data—something only the government and a handful of big technology companies currently possess.
In other words, the US needs to create its own version of Europe’s GAIA-X. It’s worth noting that In-Q-Tel was one of the first investors in Palantir. In his last days in office, President Trump launched the National Artificial Intelligence Initiative Office, providing governance for a “national AI strategy” that “will serve as the central hub for Federal coordination and collaboration in AI research and policymaking across the government, as well as with private sector, academia, and other stakeholders.”
Palantir’s patient investors have played the long game and have seemingly won. Under the Federal Acquisition Streamlining Act of 1994, the government must procure commercial solutions based on “best value”. With 20 years of experience in what was formerly a highly niche area of software, and the ability to deploy Palantir’s platform in a matter of hours, it is unlikely that anyone else can offer anything close.
First is the obvious macro trend, which has almost become a banality: the world is becoming a software world, and the institutions that survive and thrive provide benefits to their citizens. Both in terms of actual output, but also real output. Meaning output that includes data protections that make sure the data is actually preserved in a way that guarantees its veracity, that gives comfort to its citizens, that makes sure the citizens aren’t merely a product to be monetized.
We’re in a context where it is basically binary. Institutions and societies, which implement software effectively, will dramatically outperform.
-Alex Karp (2/19/2021)
Palantir's biggest risk is a powerful and politically connected list of enemies that includes Google and George Soros. Some of these enemies have strong incentives to ensure that any association with Palantir is controversial; putting the power of data and analytics in the hands of a diverse group of organizations threatens the highly centralized status quo. Palantir's success will rest on Alex Karp's ability to navigate this criticism. Fortunately for Palantir investors, the company seems to be gaining traction and momentum, judging by a constant stream of new partnership and customer announcements.
On the other hand, Google's position may be more vulnerable than it seems. TSMC has demonstrated the power of industry collaboration, toppling the mighty Intel. Perhaps we will see a similar dynamic with IBM focusing on quantum hardware, Microsoft focusing on a quantum programming language, and Palantir focusing on a collaborative data platform for deployment. Google has to juggle a shift towards the industrial internet, quantum computing R&D, competition from privacy-focused alternatives, and mounting criticism of its entire business model. This is in addition to an armada of antitrust probes.
Despite this obsolescence risk, Google remains one of the most powerful corporations in history, complete with detailed psychographic profiles covering a significant portion of the population. The possibility of combining its vast data troves with the power of quantum computing is truly mind-bending. We are neutral on Google's outlook: Google could become the quantum era's Arasaka or its AOL.
We are very bullish on Palantir, but will readily admit the difficulty of valuing such a business; it is more art than science. If Palantir succeeds, it will have a software platform that connects and manages entire industries. That alone makes Palantir's revenue guidance seem conservative, while leveraging AI/ML to write code (something we discussed in a previous article) should decrease the costs of scaling the business.
The situation is further amplified by the likelihood that the government will step in and accelerate adoption of such technology, to keep pace with Europe's GAIA-X and China's Alibaba, which recently unveiled a secret 3-year project to develop a prototype smart factory that runs on data and AI.
How big is this opportunity? Palantir believes its addressable market is $119B. Palantir's partners, such as IBM, all have a vested interest in seeing the company succeed. The analysis is yet further complicated by the fact that Palantir's platform has significant network effects. The more companies join, the more value it can potentially provide.
We are going to be the most important software company in the world, and people will figure out what that's valued at over a long period of time. And we're very comfortable with investors toying around: 'it could be like this, it could be like that'.
We are going to deliver the world's best software with the world's most efficient way of delivering it. Investors will decide what that work is worth to them. And I think you'll find in a number of years that there will be a consensus: Palantir is a truly special software company that is arguably the most important software company in the world.
-Alex Karp (9/30/2020)
Suppose investors come to realize that Palantir's opportunity is, in our view, bigger than that of Snowflake Inc. If Palantir traded at a similar multiple, the company would be valued at roughly $100B, or $55/share. If Palantir could capture 8% of this $119B market, it would be worth $105/share, assuming a 20x revenue multiple (half of what it trades at currently).
Therefore, we are long Palantir with a price target of $50 in the near term and $100 in the medium term. But suppose that Palantir could capture 30% of the market... it could be worth $400 in the long term, with the added tailwind of significant network effects providing a roadmap to future growth.
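The arithmetic behind these figures can be checked with a back-of-the-envelope sketch. The ~1.82B diluted share count is our assumption, implied by the article's own $100B ≈ $55/share figure; the $119B addressable market and the multiples are from the text.

```python
# Back-of-the-envelope check of the per-share figures above.
SHARES = 1.82e9   # assumed diluted share count (implied by $100B ~ $55/share)
TAM = 119e9       # Palantir's stated addressable market

def implied_share_price(market_capture: float, revenue_multiple: float) -> float:
    """Implied price per share for a given TAM capture and price-to-revenue multiple."""
    revenue = TAM * market_capture
    market_cap = revenue * revenue_multiple
    return market_cap / SHARES

print(round(implied_share_price(0.08, 20)))   # ~105  (8% capture at 20x revenue)
print(round(implied_share_price(0.30, 20)))   # ~392  (the ~$400 long-term case)
print(round(100e9 / SHARES))                  # ~55   (Snowflake-style $100B cap)
```

Note how sensitive the output is to the two inputs: the jump from $105 to roughly $400 comes entirely from raising assumed market capture from 8% to 30% at a constant multiple.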
The old adage on Wall Street has always been that 'this time, it's never different.' Some once suggested that the internet itself was an overhyped fad (example 1 | example 2), particularly after the tech bubble. So far, in hindsight, the future has been a place where the facts are stranger than fiction.
Disclosure: I am/we are long PLTR, GOOGL, MSFT, AMZN. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.