Sam Altman’s goal for ChatGPT to remember 'your whole life’ is both exciting and disturbing


OpenAI CEO Sam Altman laid out a grand vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.

When asked by one attendee how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.

The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”

“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context,” he described.

“Your company just does the same thing for all your company’s data,” he added.

Altman may have some data-driven reason to think this is ChatGPT’s natural future. In that same discussion, when asked for cool ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then use “complex prompts” against that data.

Moreover, with ChatGPT’s memory options, which can use previous chats and memorized facts as context, he said one trend he’s noticed is that young people “don’t really make life decisions without asking ChatGPT.”

“A gross oversimplification is: Older people use ChatGPT as, like, a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”

It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.

Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel needed for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.

But the scary part? How much should we trust a Big Tech for-profit company to know everything about our lives? These are companies that don’t always behave in model ways.

Google, which began life with the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.

Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week was randomly discussing a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.

Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous decisions and ideas. Altman quickly responded by promising the team had fixed the tweak that caused the problem.

Even the best, most reliable models still just outright make stuff up from time to time.

So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, that’s also a situation ripe for misuse.