
Thursday, August 31, 2023

Google Launches Tool That Detects AI Images In Effort To Curb Deepfakes - Forbes

Fake images and misinformation in the age of AI are growing. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images. And that was before generative AI tools became widely available to the public.

In August 2023, Adobe shared statistics showing that the number of AI-generated images created with Adobe Firefly had reached one billion, a milestone hit only three months after the tool's March 2023 launch.

In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool watermarks and identifies AI-generated images by embedding a digital watermark directly into an image's pixels, imperceptible to the human eye but detectable by software for identification.
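Google has not published SynthID's internals, which are based on a learned model rather than any fixed pixel rule. Still, the general idea of hiding a machine-readable signal in pixel values can be illustrated with a classic least-significant-bit scheme; the sketch below is a deliberately simplified stand-in, not SynthID's actual approach:

```python
def embed_watermark(pixels, bits):
    """Overwrite the least significant bit of each leading pixel."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, length):
    """Read the hidden bit pattern back out of the pixels."""
    return [p & 1 for p in pixels[:length]]

image = [200, 13, 98, 255, 42, 7, 120, 64]   # toy grayscale values
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)

assert extract_watermark(stamped, len(mark)) == mark
# Each pixel moves by at most one intensity level: invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

The point of the toy is only that a watermark can live in data a viewer never consciously sees; a robust scheme like SynthID must additionally survive cropping, compression, and re-encoding, which a bare LSB mark does not.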

Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google's SynthID is a starting point, the problem of deepfakes will not be fixed by a single solution.

“People forget that bad actors are also in business. Their tactics and technologies continuously evolve, become available to more bad actors, and the cost of their techniques, such as deep fakes, comes down,” said Bondi.

“The cybersecurity ecosystem needs multiple approaches to address deep fakes, with collaboration to develop flexibly architected approaches that will evolve to meet and surpass the bad actors' technology,” added Bondi.

Ulrik Stig Hansen, co-founder of Encord, a London-based computer vision training data platform, said that there is little doubt synthetic data detection will be one of the significant challenges ahead.

"We've seen it over and over with new technologies, and it's no different with generative AI — just as it's being used in overwhelmingly positive ways (e.g., cheaper diagnostics in healthcare, faster disaster recovery), there'll be vulnerabilities for those looking to exploit," added Hansen.

"It'll be more a matter of how quickly the preventative applications can progress compared to the bad guys and how regulation will shape around the space," said Hansen. "We've seen some indications of what this might look like in the EU, but the key will be to enable the progress of positive applications while building solid guardrails to limit misuse."

Digital watermarking, a term coined by Andrew Tirkel and Charles Osborne in 1992, is a way to identify the origin and authenticity of images. Another method relies on an image's metadata, but metadata can be removed or modified, which diminishes trust in the image's authenticity.

Dattaraj Rao, chief data scientist at Persistent Systems and holder of 11 computer vision patents, said watermarking has traditionally been used to protect image copyrights, but it can degrade and alter the content it protects.

"Using this method for AI-generated images, which have been in use for several years, is a great improvement," said Rao in an email interview. "Although the major challenge will be for all enterprises and users to adopt a single standard for this—we still have not agreed upon a single format for storing image data; hence, we have GIF, JPEG, PNG, etc."

Because AI technology is evolving rapidly, someone will eventually find a way to break this watermark and override it, Rao said.

"That's what happened with visible watermarks,” he added. “Today, multiple algorithms can detect and fill the watermarked pixels of the image with best guess colors based on surroundings.”
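The "best guess colors" attack Rao describes can be sketched in a few lines. Real removal tools use proper inpainting algorithms; this naive neighbor-average version (all values hypothetical) only illustrates the principle that marked pixels can be plausibly reconstructed from their surroundings:

```python
def fill_from_neighbors(img, marked):
    """Replace each marked (row, col) pixel with the mean of its
    unmarked 4-neighbors, erasing a visible watermark."""
    out = [row[:] for row in img]
    h, w = len(img), len(img[0])
    for r, c in marked:
        vals = [img[nr][nc]
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in marked]
        if vals:
            out[r][c] = sum(vals) // len(vals)
    return out

img = [[10, 10, 10],
       [10, 99, 10],   # 99 = a watermarked pixel
       [10, 10, 10]]
restored = fill_from_neighbors(img, {(1, 1)})
assert restored[1][1] == 10  # the mark is gone, blended into the background
```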

A 2017 Google research paper found that the vulnerability of watermarking techniques lies in the consistency of watermarks across image collections. To counter this, the paper proposed introducing inconsistencies when embedding the watermark in images.
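The weakness can be demonstrated with a toy experiment (all values hypothetical): if the same additive watermark is applied to every image, averaging a large collection cancels the per-image content and exposes the shared mark.

```python
import random

random.seed(0)
WATERMARK = [4, -3, 5, 0, -2, 6, 1, -4]   # hypothetical additive mark

def watermarked_image():
    """Random zero-mean 'content' plus the same watermark every time."""
    noise = [random.randint(-10, 10) for _ in WATERMARK]
    return [n + w for n, w in zip(noise, WATERMARK)]

collection = [watermarked_image() for _ in range(20000)]
# Averaging cancels the zero-mean content and leaves the shared mark.
estimate = [round(sum(img[i] for img in collection) / len(collection))
            for i in range(len(WATERMARK))]
assert estimate == WATERMARK
```

Randomizing the watermark per image, the inconsistency the paper suggests, is exactly what defeats this averaging attack.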

“The computer vision engineer inside me feels that using imaging techniques is not a long-term solution here—at the end of the day, an image is an array of pixel color intensities, which can easily be manipulated,” said Rao. “This problem will need a generic solution for protecting digital content using techniques like cryptography.”

Today, we know that some websites are safe based on public key encryption provided by TLS certificates, which are issued by certain approved agencies, Rao explained. “Similarly, we will probably need a way to verify any digital content,” he said. “Technologies like blockchains and digital ledgers can help create a decentralized, immutable register for digital content so you know the complete lineage for any image or Word document on the internet, but this, of course, is difficult to enforce.”
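Rao's ledger idea can be sketched as an append-only, hash-chained log (an assumption-laden toy, not any production blockchain API): each registered content hash is chained to the previous entry, so history cannot be silently rewritten and any exact piece of content can later be verified against the register.

```python
import hashlib

class ContentLedger:
    """Toy append-only register of content hashes, chained like a ledger."""

    def __init__(self):
        self.entries = []  # list of (content_hash, entry_hash) tuples

    def register(self, content: bytes) -> str:
        content_hash = hashlib.sha256(content).hexdigest()
        prev = self.entries[-1][1] if self.entries else "genesis"
        # Chain each entry to its predecessor so history is tamper-evident.
        entry_hash = hashlib.sha256((content_hash + prev).encode()).hexdigest()
        self.entries.append((content_hash, entry_hash))
        return entry_hash

    def verify(self, content: bytes) -> bool:
        """Check whether this exact content was ever registered."""
        h = hashlib.sha256(content).hexdigest()
        return any(c == h for c, _ in self.entries)

ledger = ContentLedger()
ledger.register(b"original image bytes")
ledger.register(b"edited image bytes")
assert ledger.verify(b"original image bytes")
assert not ledger.verify(b"tampered image bytes")
```

A single change to the bytes produces a different hash, which is why such a register can pin down lineage; the hard part, as Rao notes, is enforcement, not the data structure.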

Rao added that whichever method succeeds, the challenge will be in developing the standard and getting it endorsed by multiple organizations and countries globally.

In July 2023, the White House hosted a meeting with seven leading AI companies, including Google and OpenAI. Each company pledged to create tools to watermark and detect AI-generated text, videos and images.

The Pew Research Center study also showed that 77% of U.S. adults said that steps should be taken to restrict altered videos and images intended to mislead, but only 22% said they preferred protecting the freedom to publish and access them.

Neil Sahota, a futurist, lead artificial intelligence advisor to the United Nations and author of Own the AI Revolution (McGraw Hill), said we can and should equip more people to verify the authenticity of images.

“This includes having companies step up to the digital plate. The watermarking idea has been out there for a while,” said Sahota. “It will help to some degree, but the biggest problem is that the watermarks can be spoofed.

“One of the advantages physical watermarks have is that they can use things like ultraviolet ink, so that part of it is invisible, and we haven’t figured out how to do that with an e-watermark,” said Sahota.

“If Google’s solution has the ability (which would make it much harder to spoof), then this would be a tremendous leap forward,” he added.
