It also predicts an “apocalyptic” future: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

What Is the Real “Weight” of This Letter?

At first, it’s easy to sympathize with the cause, but let’s reflect on the broader global context involved.
Despite being endorsed by leading technology authorities, the letter has drawn controversy because some signatories, including Elon Musk, have been inconsistent in their own practices regarding the security limits of their technologies. Musk himself fired his “Ethical AI” team last year, as reported by Wired, Futurism, and many other news outlets at the time. It’s worth mentioning that Musk, who co-founded OpenAI and left the company in 2018, has repeatedly attacked it on Twitter with scathing criticisms of ChatGPT’s advances.
Sam Altman, co-founder of OpenAI, in a conversation with podcaster Lex Fridman, asserted that concerns around AGI experiments are legitimate and acknowledged that risks such as misinformation are real. In an interview with the WSJ, Altman also said the company has long been concerned about the security of its technologies and spent more than six months testing the tool before its release.

What Are Its Practical Effects?

Andrew Ng, founder and CEO of Landing AI, founder of DeepLearning.