
Blockchain & Crypto

Fox’s Verify to Weed Out AI-Generated Media Stories

Fox Corp has released its blockchain-powered tool, Verify, to weed out media stories generated by artificial intelligence. The project, built on the Polygon blockchain, was formed with one goal in mind: to verify, or authenticate, bona fide articles and images in the hope of weeding out deepfakes.


Fox released the Verify open-source protocol in early January 2024 to help establish the origin and history of registered media. The protocol is built on the Polygon Proof-of-Stake (PoS) blockchain and is currently still in beta.

However, when it is finally available, Verify aims to bridge AI platforms with media companies, helping to distinguish genuine media stories from those generated by AI. Fox’s technology team developed Verify so that readers can know where the images in media stories originated.

The Verify tool will let publishers register original content in order to prove origination. Each piece of registered content will then be cryptographically signed on-chain. After registration, consumers will be able to identify content from trusted sources using Verify.
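As a rough sketch of what that registration step could look like, the snippet below hashes an article, signs the hash with a publisher key, and records it on a registry contract on Polygon. The registry address, ABI, and `register` function are hypothetical placeholders for illustration, not Fox’s actual protocol.

```typescript
// Hypothetical sketch of publisher-side content registration (not Fox's actual API).
import { Wallet, JsonRpcProvider, Contract, keccak256, toUtf8Bytes } from "ethers";

// Assumed: a registry contract on Polygon exposing register(bytes32, string).
const REGISTRY_ABI = ["function register(bytes32 contentHash, string metadataUri)"];
const REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

async function registerArticle(articleBody: string, metadataUri: string) {
  const provider = new JsonRpcProvider("https://polygon-rpc.com");
  const publisherWallet = new Wallet(process.env.PUBLISHER_KEY!, provider);

  // Fingerprint the content; only the hash goes on-chain, not the article itself.
  const contentHash = keccak256(toUtf8Bytes(articleBody));

  // The publisher's signature ties the content hash to a known identity.
  const signature = await publisherWallet.signMessage(contentHash);

  // Record the hash and a pointer to off-chain metadata on the registry contract.
  const registry = new Contract(REGISTRY_ADDRESS, REGISTRY_ABI, publisherWallet);
  const tx = await registry.register(contentHash, metadataUri);
  await tx.wait();

  return { contentHash, signature, txHash: tx.hash };
}
```

In this sketch only the hash and a metadata pointer are written on-chain, which keeps registration cheap while still making the record tamper-evident.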

The Proliferation of Deepfakes

Deepfakes have become a troubling trend, offering a sophisticated method for creating deceptive videos that feature well-known personalities. These videos use digital manipulation to seamlessly transform a person’s face or body, convincingly portraying them as someone else. The intention is often malicious: substituting one person’s likeness for another’s in order to deceive viewers.

In a recent case, a deepfake video falsely showed a Hindi news presenter endorsing a casino product. Instances like these add a layer of risk for people seeking authentic gambling experiences: while trying to identify online casinos, they may be led astray and fall victim to scams and other threats.

Artificial intelligence makes it easier for deepfake content to mislead readers. Some deepfake images and videos are difficult to distinguish from authentic media. Publishers are also finding that their content is being used to train AI models without their consent.

Deepfakes can also tarnish reputations by spreading false information through fabricated, AI-generated videos. Fortunately, Fox has come up with a solution to help combat this problem: a new blockchain-based tool to verify the authenticity of digital media in the age of AI.

How Fox’s Verify Tool Works

When Fox announced the Verify tool, not everyone was convinced. Skeptics thought it was a public relations move, and the only way to prove otherwise was to put the tool to the test. Fox claims consumers can load URLs and images into the Verify system to determine their authenticity.

This is possible if the publisher has added them to the database, which is powered by the Polygon blockchain. When a publisher adds content to the database, the metadata and other information are stored for future verification by consumers.

Polygon gives media content on Verify an immutable audit trail, which means neither Fox nor other media houses and entities can manipulate the database. The tool wraps a sophisticated database check in a simple web app that tracks images and URLs.

In the modern world dominated by large language models, such verifications can help distinguish authentic stories from AI products. This is certainly enough incentive for legacy publishers that are navigating license deals and copyrights.

Utility, Challenges, and Resolution

Working out how Verify will benefit consumers reveals a number of challenges and limitations. The tool provides a text input box where people can enter the URL of a media story to check its authenticity.

When this is done, users will get various details, including a transaction hash and signature representing the content, associated metadata, licensing information, and even images. The tool can also provide links to other sources that have used the photos.
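To make that concrete, a lookup result might resemble the record sketched below. The field names and the verify.example.com endpoint are assumptions for illustration; the real Verify service may expose a different format.

```typescript
// Hypothetical shape of a Verify lookup result (illustrative only).
interface VerifyResult {
  transactionHash: string;   // on-chain registration transaction
  signature: string;         // publisher's signature over the content hash
  metadata: {
    title: string;
    publisher: string;
    publishedAt: string;     // ISO 8601 timestamp
  };
  license?: string;          // licensing terms, if the publisher attached any
  imageUrls: string[];       // registered images associated with the article
}

// Assumed endpoint; the real Verify service may expose a different interface.
async function lookupArticle(url: string): Promise<VerifyResult | null> {
  const response = await fetch(
    `https://verify.example.com/api/lookup?url=${encodeURIComponent(url)}`
  );
  if (!response.ok) return null; // not registered, or the service is unavailable
  return (await response.json()) as VerifyResult;
}
```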

However, if the images are cropped, screenshotted, or altered in even the slightest way, they won’t be authenticated. Verify also runs into several problems in its current iteration and will require honing to become a practical tool for consumers. Here are some of the challenges:

  • The average reader is unlikely to verify content lifted from a news website
  • Verify can’t tell whether an image is AI-generated; it only confirms the source
  • Trusted news outlets sometimes use AI-generated content themselves
  • User apathy: people who want something to be true often don’t care whether it actually is

One way to resolve these challenges is to build the tool directly into the applications consumers use to explore online content. For example, Verify could be added to browsers and social media sites, with a badge displayed on content that has been registered in the Verify database.
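A minimal sketch of that idea, assuming the hypothetical `lookupArticle` helper from the earlier example, could look like this: check the current page against the registry and render a badge only when a record exists.

```typescript
// Hypothetical browser-side badge check, reusing the lookupArticle sketch above.
async function showVerifyBadge(): Promise<void> {
  const result = await lookupArticle(window.location.href);
  if (!result) return; // page not registered; show nothing

  const badge = document.createElement("div");
  badge.textContent = `✔ Verified content from ${result.metadata.publisher}`;
  badge.style.cssText =
    "position:fixed;bottom:16px;right:16px;padding:8px 12px;" +
    "background:#0a7d32;color:#fff;border-radius:6px;font-size:14px;";
  document.body.appendChild(badge);
}

showVerifyBadge();
```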

Conclusion

Verify is still in its infancy and leaves a lot of room for improvement. According to Fox’s partner Polygon, Verify aims to bridge the gap between media companies and AI platforms. The tool offers additional features to create new commercial opportunities for content owners.

By using smart contracts to specify conditions for content access, publishers and content owners can get paid by AI platforms and other people using their content. The licensing use case is a key selling point as publishers and AI companies navigate complex legal issues.
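As a hedged sketch of how such licensed access might work, the snippet below calls a hypothetical licensing contract that charges a publisher-set fee before granting access to a piece of registered content. The contract address, ABI, and fee logic are illustrative assumptions, not part of the published Verify protocol.

```typescript
// Hypothetical sketch of an AI platform paying for licensed content access (illustrative only).
import { Contract, JsonRpcProvider, Wallet } from "ethers";

// Assumed: a licensing contract that quotes a fee per content hash and accepts payment.
const LICENSE_ABI = [
  "function accessFee(bytes32 contentHash) view returns (uint256)",
  "function purchaseAccess(bytes32 contentHash) payable",
];
const LICENSE_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

async function purchaseLicensedContent(contentHash: string): Promise<string> {
  const provider = new JsonRpcProvider("https://polygon-rpc.com");
  const consumerWallet = new Wallet(process.env.CONSUMER_KEY!, provider);
  const license = new Contract(LICENSE_ADDRESS, LICENSE_ABI, consumerWallet);

  // Read the fee the publisher set for this piece of content.
  const fee = await license.accessFee(contentHash);

  // Pay the fee; the contract records that this wallet is licensed to use the content.
  const tx = await license.purchaseAccess(contentHash, { value: fee });
  await tx.wait();

  return tx.hash;
}
```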

The New York Times recently filed a lawsuit against OpenAI and Microsoft, claiming the two tech giants used its content to train AI models without permission. Verify has the potential to provide a framework for AI companies to access content from verified publishers.
