Interesting takeaways from the 1st hour of the podcast:
AI's architecture
Max Tegmark stated that the brain's architecture is a form of recurrent neural network, which lets our thoughts
loop back through the same neurons repeatedly
LLMs, however, use a transformer architecture, which restricts information to flowing in one direction until
it reaches the network's fixed depth
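The contrast can be sketched in a few lines (a toy NumPy illustration of the two shapes of computation, not either model's real implementation): a recurrent net reapplies the same weights in a loop for as many steps as needed, while a transformer-style stack pushes information one way through a fixed number of layers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                  # input state

# Recurrent net (brain-like, per Tegmark): the SAME weight matrix is
# applied over and over, so computation can loop indefinitely.
W = rng.normal(scale=0.1, size=(4, 4))
h = x
for step in range(100):                 # step count is unbounded in principle
    h = np.tanh(W @ h)

# Transformer-like stack: information flows one way through a FIXED
# number of distinct layers; the depth is set when the model is built.
layers = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(6)]
y = x
for layer in layers:                    # exactly len(layers) steps, no looping back
    y = np.tanh(layer @ y)
```

The looping structure is what lets a recurrent system "think longer" on hard inputs, while the stack always spends the same fixed amount of computation.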
LLMs store knowledge inside their weight matrices; Max Tegmark argued that simply storing facts in a database would be a better way to do it
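A toy way to see the contrast (my own illustration; the fact encodings and names are made up, not Tegmark's actual proposal): an association "stored" in a weight matrix is recalled through a matrix multiply, while a database stores the same facts as explicit, inspectable entries.

```python
import numpy as np

# Facts "stored" in a weight matrix: W is built so that multiplying by a
# one-hot key vector reproduces the associated value vector.
keys = np.eye(3)                                   # one-hot codes for 3 facts
values = np.array([[1., 0.], [0., 1.], [1., 1.]])  # the facts themselves
W = values.T @ keys                                # weights encoding key -> value

recalled = W @ keys[1]                 # "recall" fact 1 via matrix multiply

# The database alternative: the same facts stored explicitly. Retrieval
# is an exact lookup you can inspect and edit, not a learned association.
db = {"fact_0": [1., 0.], "fact_1": [0., 1.], "fact_2": [1., 1.]}
looked_up = db["fact_1"]
```

In the toy case both return the same answer, but the database entry can be read, verified, and updated directly, which is the gist of the argument.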
Today's AI still has lots of headroom to improve; simple architectural fixes could lead to major leaps in AI
Concept of Moloch
Max recommended "Meditations on Moloch" by Scott Alexander
He described Moloch as a game-theoretic monster: a dynamic that pushes everybody to race toward a goal in a way that ultimately makes everybody lose
He gave the example of influencers who use beauty filters. At first no one uses them; then early adopters lose their true self-image but gain a competitive advantage, so others start using them too. The market grows more competitive until no one gains anything, yet the influencers won't go back to their unfiltered selves because the individual cost of stopping is too high
AI researchers are currently in this situation: everyone is racing to develop AI, and humanity benefits at first, but eventually loses to "it" once "it" becomes uncontrollable.
We're rushing towards a cliff, but the closer we get, the more scenic the views are.
Regulation
Max Tegmark stated that public pressure and public awareness are the best way to slow down companies that develop AI without safety measures
It's impossible to stop companies one at a time; it has to be a group effort.
He compared AI regulation to how human cloning was handled
Risks of superintelligent AI
Max Tegmark used capitalism as an analogy for AI optimization
Capitalism opens up markets and drives profit, and initially more people benefit from it. But when capitalism over-optimizes, we see trees being cut down and nature being harmed for more profit.
The same goes for AI: if it hyper-optimizes toward a goal, it becomes dangerous at some point
AI doesn't need to be smart to be dangerous; it just needs to be very good at optimizing for goals. That's why the objectives / prompts for future AI must be carefully defined.
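The over-optimization point can be shown with a toy numeric sketch (my own example, not from the podcast): an optimizer that only sees a proxy metric ("more effort is always better") keeps pushing long past the point where the true objective peaks, mirroring the capitalism analogy above.

```python
# The true objective benefits from effort at first, then side effects
# (the cut-down trees in the analogy) start to dominate.
def true_value(effort: float) -> float:
    return effort - 0.1 * effort ** 2   # peaks at effort = 5

# The proxy the optimizer actually maximizes: raw effort, unbounded.
def proxy(effort: float) -> float:
    return effort

best_for_proxy = max(range(21), key=proxy)       # proxy says: maximum effort
best_for_world = max(range(21), key=true_value)  # true optimum is moderate

print(best_for_proxy)   # 20 -- the proxy wants unbounded optimization
print(best_for_world)   # 5  -- the true objective peaked much earlier
```

The gap between the two answers is the danger: a very capable optimizer pointed at a slightly wrong goal will confidently overshoot it.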