Bipartisan Senators provide a roadmap for AI policy in the U.S. Senate

The Bipartisan Senate AI Working Group released a roadmap for AI policy in the U.S. Senate, encouraging the Senate Appropriations Committee to fund cross-government artificial intelligence research and development projects, including research for biotechnology and applications of AI that would fundamentally transform medicine. 

The Group acknowledges AI’s various use cases, including those within the healthcare setting, such as improving disease diagnosis, developing new medicines, and assisting providers in various capacities. 

The senators wrote that relevant committees should consider legislation that supports AI deployment in the sector, along with guardrails and safety measures that protect patients without stifling innovation. 

“This includes consumer protection, preventing fraud and abuse and promoting the usage of accurate and representative data,” the Senators wrote. 

The legislation should also provide transparency requirements for providers and the general public to understand AI’s use in healthcare products and the clinical setting, including information on the data used to train the AI models. 

The Roadmap states that committees should support the National Institutes of Health (NIH) in developing and improving AI technologies as well, specifically regarding data governance and making data available for science and machine learning research while ensuring patient privacy. 

Department of Health and Human Services (HHS) agencies, like the Food and Drug Administration (FDA) and the Office of the National Coordinator for Health Information Technology, should also be provided with tools to effectively determine the benefits and risks of AI-enabled products so developers can adhere to a predictable regulatory structure. 

The senators wrote that committees should also consider “policies to promote innovation of AI systems that meaningfully improve health outcomes and efficiencies in health care delivery. This should include examining the Centers for Medicare & Medicaid Services’ reimbursement mechanisms as well as guardrails to ensure accountability, appropriate use, and broad application of AI across all populations.” 

The Group also encouraged companies to perform rigorous testing to evaluate and understand any potential harmful effects of their AI products and not to release products that do not meet industry standards. 

In December, digital health leaders provided MobiHealthNews with their own insights into how regulators should configure rules around AI use in healthcare.  

“Firstly, regulators will need to agree on the required controls to safely and effectively integrate AI into the many facets of healthcare, taking risk and good manufacturing practices into account,” Kevin McRaith, president and CEO of Welldoc, told MobiHealthNews.

“Secondly, regulators must go beyond the controls to provide the industry with guidelines that make it viable and feasible for companies to test and implement in real-world settings. This will help to support innovation, discovery and the necessary evolution of AI.”

Salesforce senior vice president and general manager of health Amit Khanna said regulators also need to define and set clear boundaries for data and privacy. 

“Regulators need to ensure regulations do not create walled gardens/silos in healthcare but instead, minimize the risk while allowing AI to reduce the cost of detection, delivery of care, and research and development,” said Khanna.

Google’s chief clinical officer, Dr. Michael Howell, told MobiHealthNews that regulators need to think about a hub-and-spoke model. 

“We think AI is too important not to regulate and regulate well. We think that, and it may be counterintuitive, but we think that regulation well done here will speed up innovation, not set it back,” Howell said.

“There are some risks, though. The risks are that if we end up with a patchwork of regulations that are different state-by-state or different country-by-country in meaningful ways, that’s likely to set innovation back.” 
