Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which would shield AI developers from civil liability in exchange for transparency that lets professionals better understand an AI tool's capabilities and limits before relying on it. While initial reactions were mostly positive, some critics argue that the bill grants developers too much protection at the expense of transparency and accountability. The legislation covers AI used through professional intermediaries and leaves out cases where no professional stands between the AI and the user, such as chatbots interacting with minors, raising concerns about potential harms and accountability in those scenarios. The bill is widely seen as a starting point, with calls for clearer standards and stronger transparency requirements. By comparison, the EU's AI regulations take a rights-based approach, emphasizing individual empowerment and explicit rights for users, whereas the RISE Act is largely risk-based and addresses liability primarily from the perspective of professional adopters. Experts say further refinements are needed before the bill can provide an effective framework for AI accountability and user protection in a rapidly evolving technological landscape.
