The Risks Point to the Need for Greater Regulation of AI, Says Gary Marcus

Written by Maria Ovseytseva, 17 December 2024

AI expert Gary Marcus recently called on policymakers to establish comprehensive AI guidelines to ensure integrity and safety.

Speaking at a recent forum, Marcus emphasized the urgent need for regulatory structures that can keep pace with rapid advances in artificial intelligence.


Key Details

Marcus was blunt in his assessment, highlighting the potential dangers of AI alongside its benefits. He argued that current rules need significant updates, including:

  • Improved standards for how AI systems operate, with an emphasis on accountability and transparency.
  • Measures to address bias in AI systems.
  • Requirements that AI serve humanity and operate under human oversight.

"AI should be a tool that serves humanity, not a force that operates without human oversight," Marcus stated during his presentation.

  • Watch Marcus' presentation on YouTube.

His Recommendations Include:

  • Establishing global standards for AI use.
  • Developing independent verification mechanisms to validate AI systems.
  • Conducting more research into how to develop ethical AI.

Broader Context

This discussion comes amid growing concerns about AI misuse in areas such as surveillance, mapping, and voting. Policymakers worldwide are grappling with how to manage the dual-use potential of generative AI tools such as ChatGPT.

Gary Marcus has consistently advocated for responsible AI development. His latest call to action builds on his previous efforts to close the gap between the current capabilities of AI systems and society's ability to use them equitably.


Further Reading and Resources

  • Read more about Gary Marcus' views in GeekWire.