I was planning to issue a Buy recommendation on Palantir (PLTR) in this Thursday’s letter, but after the company reported a great quarter today, Wall Street is clobbering the stock in the aftermarket. That’s dumb, and I want you to take advantage of them. I’ll have the basic write-up on Thursday. PLTR is an outstanding Big Tech dominator of AI software. I recommended PLTR in Boomberg on October 28, 2022 at $8.64. Their business has improved so rapidly that it’s still cheap. Here is this week’s Boomberg write-up:
Palantir (PLTR) reported a good March quarter, raised guidance for the year, and saw its stock drop. A Seeking Alpha headline about summed it up.
Revenue grew a solid 20.8% from last year to $634.33 million, well above the high end of their guidance and the $617.61 million consensus. They closed 87 deals over $1 million, including 27 over $5 million and 15 over $10 million. Commercial was very strong: US commercial grew 40% year-over-year and 14% quarter-over-quarter.

Pro forma earnings of eight cents a share matched the consensus estimate. The $106 million in profit was the largest quarterly profit in Palantir’s 20-year history.
On the conference call (INVESTOR LETTER HERE and SLIDES HERE and TRANSCRIPT HERE), they said the way they sell AIP is the secret sauce. They run five-day AI Bootcamps that give potential customers real-time, hands-on use of the system. As CEO Alex Karp wrote: “Other companies engage in intricate and elaborate efforts to sell and market their offerings. Their resources are focused on marketing at the expense of actually constructing the software and building the systems that they hope to sell.
“We have taken a different approach and are now investing even more heavily in simply letting potential partners use our software in order to decide what works and what does not for themselves. In the last quarter alone, we conducted more than 660 bootcamps with organizations across industries and sectors, providing potential partners with an opportunity to test and begin building on our platforms. These enterprises cannot wait for custom solutions to be built. And they are unwilling to invest in software systems that might work someday down the road. They need results now. And we believe that we have the only platform that works….
“Our intention is to make our Artificial Intelligence Platform (AIP) the most dominant infrastructure in the market and power the effective deployment of artificial intelligence and large language models across institutions.”
The CFO said: “We are also seeing substantial deal cycle compression. As one example, a leading utility company signed a seven-figure deal just five days after completing a bootcamp. Another customer immediately signed a paid engagement after just one day of their multi-day bootcamp and then converted to a seven-figure deal three weeks later. We expect the favorable unit economics and higher throughput to continue to accelerate our business.”
They guided the June quarter to revenue between $649 million and $653 million, well above the $643.39 million estimate.
For the full year they raised their revenue guidance to between $2.677 billion and $2.689 billion, in line with but not above the $2.68 billion consensus. That’s why some short-term traders sold the stock – they wanted a bigger annual raise in light of the strong March quarter. Palantir will, of course, solidly beat guidance.
Importantly, they raised US commercial guidance to “in excess of $661 million,” which is at least 45% growth. This time last year the bears were saying PLTR had to show adoption by commercial customers as well as governments. They expect adjusted free cash flow between $800 million and $1 billion, and said they will have positive GAAP operating income and net income in each quarter of this year.
Graphic h/t @StockMarketNerd
After the selloff, Wedbush’s Dan Ives wrote: “We are laser focused on the AI story playing out with AIP leading the way and Palantir delivered robust numbers on this front yet again. We believe any modest sell-off post print is a golden buying opportunity for this pure play AI name.” Ives has an Outperform rating and $35 price target.
Palantir CTO Shyam Sankar gave a keynote on “American Prosperity is National Security” at this week’s AI Expo for National Competitiveness. He said AI is the next American birthright. Wielding AI to increase readiness and mobilize the industrial base requires building solutions that scale to meet the context of the mission.
Palantir ended the quarter with $3.9 billion in cash and no debt. PLTR is a Buy under $22 for the AI Decade ahead.

1st
Have owned it for 2 years.
AI is a disruptor, not in technology, but in destroying human civilization. It is a tool for censorship, which outweighs whatever benefits it may have for the profit margin by eliminating human workers. As for AI’s claimed ability to efficiently comb the internet for info far surpassing human capabilities, that completely misses the point of why human judgment will always be far superior. Read on.
A few weeks ago, I posted about how I couldn’t publish on YMB an analysis of P/E and P/S ratios concerning TGTX. Subscriber Michael said that Yahoo’s AI algorithms are primitive. I’ll accept the truth of his remark. Here’s my latest example.
On the SCYX board, someone mentioned MTNB, whose drug product MAT 2203 is an innovative reformulation of amphotericin B, the long-standing standard of care for serious fungal infections. MAT 2203 uses proprietary liposomal technology to give oral delivery bioavailability approaching the efficacy of intravenous administration, while possibly carrying much less risk of kidney toxicity. SCY 247 is as yet unproven and is only starting early trials. No doubt 247 could be further improved with liposomal technology, just as Dr. Chris Shade has done at Quicksilver Scientific by custom-designing different types of liposomes to suit different nutrients for much better bioavailability and thus better efficacy. But then SCYX would have to pay MTNB licensing fees for this.
I posted a paraphrase of that last paragraph on YMB for SCYX. It was rejected as “uncivil.” I tried several shorter versions, all of which were also rejected as “uncivil.” If anyone thinks this is “uncivil,” they are a mentally deranged nitwit akin to a typical woke advocate who calls people “racist” for merely having a different but educated opinion. This is the danger of AI censorship, which destroys the productive free exchange of ideas. In so doing, it is the tool of tyrants whose goal is control and subsequent murder of the human race in the manner of Hitler, Stalin, etc.
Some will make lots of money on AI, and already have. But AI investing is the epitome of socially irresponsible investing, like investing in tobacco companies that kill lots of people from cancer and vascular disease, or sugar beverage companies that are the biggest contributors to diabetes which kills billions of people. When I see diabetic patients who are way out of control with hemoglobin A1c way over 10, invariably they are drinking soda and using sugar in coffee/tea.
Get this straight – the most sophisticated AI is no match for human judgment. AI can integrate more data than any person can, but that is not prudent judgment. It is better to have a committee of people make assessments using less data than to have AI churn through a junk pile thousands of times larger yet fail to formulate courses of action that people are comfortable with.
School kids are using AI to generate papers that may look intelligent. Suppose no teacher without AI can do as good a job of catching them. The kid has still suffered, because he has bypassed the thought process of solving problems on his own. His cognitive abilities decline; his brain atrophies from disuse. I predict that if AI becomes dominant, we are going to see a major increase in the incidence of serious neurological diseases like Alzheimer’s, Parkinson’s, etc. Also a warped psychopathological state of humanity. No psychiatrist or psychologist will have the means of helping these victims. That’s murder in the worst way.
Teachers are already ahead of students, feeding their papers into another large language model to determine whether they are genuine.
AI is just another development tool. It’s actually been around for a while to analyze large data sets.
All new software development raises fear in people. AI is agnostic; as you can see with Yahoo message boards, it’s garbage in, garbage out.
I think the biggest problem out there is the constant use of smartphones and social media platforms. Kids and adults are addicted to both. Facebook is probably the worst company out there; I would never invest.
So teachers are also losing their own cognitive abilities by being lazy – feeding student papers into another AI crap language model. The teacher should be pointing out students’ errors of logic, paragraph organization, etc. What will happen to student debate teams and inductive reasoning?

Worst of all, doctors who don’t think any more. Electronic records are garbage. Diagnosis codes don’t tell the real story of the patient. Fortunately, I am in an office where I hand-write details of patient history and my personal thoughts on my approach. None of this is translatable into diagnostic categories and codes. The medical assistant does this scutwork for the EMR, which is required for insurance reimbursement. The EMR info has little to do with the real truth about the patient as written by me.

Typically I get 7 pages of notes from specialists I refer patients to. Most of it is electronically generated irrelevant info, often completely false. The relevant info is buried amongst the garbage. One urologist writes for all his patients the statement “treatment plan was discussed.” That is just the software generating that statement. When I see the patient again, I ask him what the doctor said. The patient doesn’t know, and I show him that useless one-size-fits-all statement. I circle that statement angrily and tell the patient to go back to the urologist and insist that the doctor write down in his own handwriting WHAT treatment plan he recommended.

Many docs are fed up with the formats, and they copy/paste old info to save time. There have been malpractice cases from the incorrect, outdated info that gets passed along. Note that you order lab tests yourself through Life Extension. This is because your doctor doesn’t have time to give you a proper evaluation; his time is wasted on complying with stupid regulations and electronic garbage.
Yahoo is not agnostic. It is woke, socialist in its censorship of independent thinkers. Its news coverage is not to be trusted. Even higher quality AI will always have the bias of whatever person programmed it, setting parameters to suit his bias. Google censors real doctors who know the prevalent dangers of the cough-id vex and so on.
Smart phones are a tangible example of how constant attention to the virtual fake world is putting kids out of touch with reality. It is creating more sociopaths and cases of mental illness. Yes, Facebook is one of the worst out there, a big user of AI to create altered reality and destroy minds.
I was glad to dump Chinese stocks. I took my 30% loss. Anybody who buys Chinese stocks is participating in the destruction of the free world that the CCP seeks. Censorship and tyranny are the tools of the CCP, enabled further by AI.
There are a few examples of software development that I like, such as VLD’s which enhances the value of their 3D printers. I doubt this has much to do with AI.
TLDR: I didn’t say Yahoo was agnostic, I said AI was agnostic.
Teachers can use AI against the students, warning them that they will get an F if their papers are found to be generated. They can then save their cognitive abilities for reading REAL papers.
Also, I use LabCorp for blood work because I’m experimenting with paleo/keto and some supplements, and I’m curious how they affect liver function, glucose levels, and lipid panels. I’m doing multiple tests over this year. It has nothing to do with my doctor – or in my case, a PA. It’s like blood pressure: why would I go to the doc when I can check it myself?
You should really read up on AI.
I also have owned PLTR for about a year. They have recently turned the corner on profitability but that Alex Karp guy is REALLY odd.
New World Investor for 5.9.24 is posted.