The Generative AI Stock-Market Ruckus, Per the Experts

This week The Wall Street Journal published an article by Charley Grant about generative AI’s impact on public stocks, ChatGPT Is Causing a Stock-Market Ruckus(1). Mr. Grant points out that “generative AI” was mentioned 228 times on earnings calls in Q1, and that we are on pace to at least double that number in Q2 (data courtesy of AlphaSense). Meanwhile, the share price of Nvidia (NVDA) has doubled and that of Chegg (CHGG) has halved. The pace of technological change is accelerating, which means that investors need to accelerate their expert insights research process. That’s the topic of our new ebook: The Expert Insights Flywheel: Maximize Return on Your Time.

Nvidia Expert Call Transcripts

These two transcripts stood out for their length, the technicality of the questions, and the highly relevant experience of the expert (a former AI/ML Solution Architect for Amazon Web Services).

NVDA #1 – Industry Expert Believes Organizational Alignment and Compatibility With MLOps Tools Is the Biggest Factor in Determining AI Hardware Selection

The expert makes three main points:

  1. Advanced AI users running many models at high sustained throughput use GPUs for inference, which justifies the higher price of GPU instances
  2. Organizational alignment and compatibility with MLOps tools is the biggest factor in determining AI hardware selection
  3. The latest-generation GPUs are typically selected more for their latest-generation networking for distributed training than for chip performance

What we really want to highlight here, or maybe we should say show off, are the excellent questions from the analyst. One of the huge benefits of the expert transcript library is the ability to see what questions your peers are asking, not just to give you a head start in understanding a new space, but to uncover potential blind spots in your own investment theses. Consider this back-and-forth.

Expert: Now, we may see more people using A100 as part of the inference because those now-established models wouldn’t be able to be loaded into T4 instances. They’ll be too small. That’s probably why you see that, but I can tell you that it is almost unfeasible just because of the price of having this A100 in your inference stack. Typically, that’s a trade-off.

Analyst: Sure. I understand that’s a very niche thing. I’ve been looking into those companies and use cases as well. It does seem like a verified error. Just to help me understand the technical reasons, what if you could use many more T4 instances and spread your model across multiple servers instead of having it all in one server with an A100? Is that a feasible approach or does that not make sense?

Expert: No, absolutely. That would be, I would say, the next evolution. Can you shard your model across multiple smaller instances such as the T4? The problem is the interconnect, or the networking capability between those instances: can they then satisfy the SLA required for the end user? Obviously, there’s a lot more benefit if you can run your model from one set of instances; you don’t have to go distributed. I don’t think they’re quite there yet.
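The trade-off the expert describes can be sketched with a toy latency model (our own illustrative numbers and function, not figures from the call): compute time shrinks as you shard a model across more instances, but every additional instance adds interconnect overhead, and past some point the networking erases the gains.

```python
# Back-of-envelope sketch of the sharding trade-off (illustrative only):
# serving one large model on a single big GPU vs. sharding it across
# several smaller instances, where each extra shard adds network overhead.

def sharded_latency_ms(compute_ms_single: float,
                       n_shards: int,
                       interconnect_ms_per_hop: float) -> float:
    """Estimate end-to-end inference latency for a sharded model.

    Compute time divides roughly evenly across shards, but each hop
    between instances adds fixed networking overhead.
    """
    compute = compute_ms_single / n_shards
    network = interconnect_ms_per_hop * (n_shards - 1)
    return compute + network

# Hypothetical numbers: 40 ms of compute on one GPU, 5 ms per network hop.
single = sharded_latency_ms(40.0, 1, 5.0)   # 40.0 ms: no hops, no sharding
four = sharded_latency_ms(40.0, 4, 5.0)     # 10 + 15 = 25.0 ms: sharding helps
eight = sharded_latency_ms(40.0, 8, 5.0)    # 5 + 35 = 40.0 ms: network dominates
```

Whether the sharded setup still meets the end user’s SLA depends entirely on the interconnect term, which is the point the expert is making.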

NVDA #2 – Industry Expert Thinks That GPUs Account for ~20-25% of Inference Instances, and This Is Growing

This is a follow-up call by the same analyst with the same expert, which is easy to determine from the first question: 

Analyst: Just starting at a high level, last time you talked about how the mass market isn’t even using accelerators for inference at this time. I’d like to try to put some rough numbers to that. If you could take a guess or even provide a range: if you look at the broader enterprise AI/ML market, what percent is using CPUs in production at scale, or how would you think of answering that question?

Expert: The way I would probably estimate this is that there’s a low single-digit percentage of inference endpoints that are performed on ASIC-oriented architectures. I would say you’re probably getting close to 20%-25% of inference activities done on GPUs and what I would call parallel processors. The remainder would be on traditional CPU cores.

That’s just the first three minutes of the call, which covers a lot of ground:

Table of Contents

  1. Percentage of the AI/ML market using CPUs in production
  2. Low- to mid-single digits overview
  3. Horsepower needed to run different models
  4. ML versus deep learning adoption rates
  5. Heavy users demanding accelerated computing training and inference
  6. How transfer learning works
  7. Pre-trained models in inference
  8. Retailers in large recommender systems
  9. GPUs for inference overview
  10. V100 customer compared to an A100 customer
  11. Moving down from P3 use

This is a classic example of the benefits of expert networks: there is far more knowledge in this expert’s mind than could be covered in two hours of questions from a highly informed investor. Discovering great experts like this one is a huge benefit of the expert transcript library. You know you won’t be wasting your time when you schedule a call to ask your own questions.

Chegg Expert Call Transcripts

CHGG #1 – Industry Expert Thinks ChatGPT Has the Potential to Be Very Disruptive in the Education Field 

This call with an Adjunct Professor of Data Science is from early February, shortly after a public demo of ChatGPT versus Chegg. ChatGPT got the majority of math questions wrong. That turned out to be highly misleading.

First, because some students would take the risk with an error-prone ChatGPT for cost reasons:

I think a college student would take the risk if they were hard up for the cash and didn’t want to shell out for something like Chegg.

More importantly, the model improves based on user feedback:

The people who get the responses are able to give feedback. For example, if they use [ChatGPT] to cheat on an exam, they can go back and be like, “Yes, this was great. I got an A+ answer for this” or they will go back and be like “No, this did terrible. I got this wrong.” The more people who use it, the better it’s going to get, and that’s just how these models work. This thing has only been out, again, less than two months.

CHGG #2 – Former Thinkful Director Believes the Thinkful Acquisition Was a Bit of a Mismatch for CHGG 

Most of this call is about Chegg’s acquisition of Thinkful. The expert, a former director of Thinkful, was not optimistic that the company’s model would be able to compete with AI-based tools. Prescient!

If you think about just the path of innovation, I think that there’ll be lower cost, especially with AI being more and more introduced in education. I think that type of model, where you have to have a lot of human involvement in the learning experience, is just, in the end, going to be too expensive. Thinkful’s model, I think, has relatively limited upside growth.

This quote is from only six months ago, when CHGG’s share price was in the high twenties. Not just prescient, but timely. You can increase your speed to insight by signing up for a free trial of Stream by AlphaSense.



  1. The Wall Street Journal, ChatGPT Is Causing a Stock-Market Ruckus, 5/9/2023

Image generated by DALL-E 2 (prompt: “business robots throwing paper in the air buying and selling stocks at the new york stock exchange”)

Austin Moorhead
Content Marketing for Stream by AlphaSense

Austin’s primary experience is in consulting and private equity, though he’s also a published author.