US senators drill into FTC’s work to track AI attacks on older citizens

The senators asked the FTC chair four questions about AI scam data collection practices to find out if the commission can identify AI-powered scams and address them accordingly.



Source: Cointelegraph

Four United States senators have written to Federal Trade Commission (FTC) Chair Lina Khan requesting information on efforts taken by the FTC to track the use of artificial intelligence (AI) in scamming older Americans.

In the letter addressed to Khan, U.S. Senators Robert Casey, Richard Blumenthal, John Fetterman and Kirsten Gillibrand highlighted the need to respond effectively to AI-enabled fraud and deception.

Underlining the importance of understanding the extent of the threat in order to counter it, they stated:

“We ask that FTC share how it is working to gather data on the use of AI in scams and ensure it is accurately reflected in its Consumer Sentinel Network (Sentinel) database.”

Consumer Sentinel is the FTC’s investigative cyber tool used by federal, state or local law enforcement agencies, which includes reports about various scams. The senators asked the FTC chair four questions about AI scam data collection practices.

The senators wanted to know whether the FTC has the capacity to identify AI-powered scams and tag them accordingly in Sentinel. Additionally, the commission was asked whether it could identify generative AI scams that went unnoticed by the victims.

The lawmakers also requested a breakdown of Sentinel’s data to identify the popularity and success rates of each type of scam. The final question asked whether the FTC uses AI to process the data collected by Sentinel.

Casey is also the chairman of the Senate Special Committee on Aging, which studies issues related to older Americans.

On Nov. 27, the U.S., the United Kingdom, Australia and 15 other countries jointly released global guidelines to help protect AI models from being tampered with, urging companies to make their models “secure by design.”

The guidelines mainly recommended maintaining a tight leash on the AI model’s infrastructure, monitoring for any tampering with models before and after release and training staff on cybersecurity risks.

However, the guidelines did not address possible controls around the use of image-generating models and deepfakes, or around data collection methods and their use in training models.
