Will AGI one day make the same conclusion I did?
Some thoughts I had as I was watching Lex Fridman interview the CEO of Perplexity AI
I’ve been watching this long Lex Fridman interview of Aravind Srinivas, the CEO of Perplexity AI.
And they came to this point in the video where my ears really perked up.
I highly recommend watching the section that starts where the embedded video above picks up.
They are talking about AI reaching the 'AGI' level where it comes up with new truths (i.e. not just regurgitating the facts that humans feed it).
***************** Excerpt from the podcast *****************
Yeah. I'm talking about, like, real truth,
like, to questions that we don't know and explain itself
and helping us like, you know, understand, like why it is a truth.
If we see some signs of this, at least for some hard questions that puzzle us, I'm not talking about, like, things, like it has to go and solve the Clay mathematics challenges.
You know, it's more like real practical questions that are less understood today,
if it can arrive at a better sense of truth. And Elon has this, like, thing, right?
Like, can you build an AI that's like Galileo or Copernicus where it questions our current understanding and comes up with a new position which will be contrarian and misunderstood, but might end up being true.
***************************************************
This resonated so much with me.
Because when I ask chatGPT things like:
why do people age?
what is the root cause of disease?
why do people get obese?
it spouts out the bullshit that people believe today.
Whereas if AGI could literally trawl through all of the data out there (photos of people at various points in their lives, the diseases they got, the relationships between the teeth, the skull, and the body, and so on), I am almost positive it will one day come to more or less the same conclusion that I have. My conclusions are based on observation and logic over a very long period of time (roughly 10 years), while tuning out the BS that society believes.
And I’m actually extremely confident that I will be proven right in the end, for the simple reason that I have seen too many things work on myself, and too many patterns in the people around me, for me to be wrong.
Let’s see.
Hopefully I will convince the world I’m right before AGI does it for me :)