AI & Security · MEDIUM

AI Security - Veritone Automates PII Removal Process

Help Net Security

Basically, Veritone helps companies remove personal information from data before using it for AI.

Quick Summary

Veritone has launched a new tool to automate the removal of personal information from data used for AI. This affects organizations needing compliant datasets. Protecting sensitive data is crucial for ethical AI deployment.

What Happened

Veritone has introduced Veritone Redact in conjunction with its Veritone Data Refinery (VDR). The combined offering automates the removal of personally identifiable information (PII) and other sensitive data before processing. As demand for AI-ready data surges, ensuring that this data is clean and compliant is more critical than ever. Ryan Steelberg, CEO of Veritone, emphasized the company's commitment to safeguarding valuable data assets while promoting ethical AI usage.

The rise of AI applications has put immense pressure on companies to ensure that their training data is not only properly licensed but also devoid of sensitive information. With the stakes so high, Veritone's solution aims to streamline this process, allowing organizations to innovate without legal or ethical concerns.

Who's Affected

The introduction of VDR will benefit a wide range of organizations, particularly those in the public sector and industries that rely on data-driven insights. Law enforcement agencies, legal firms, and corporate entities will find this tool invaluable as they navigate the complexities of data compliance. The demand for VDR has already surged, with data processed increasing by 3.5 times in the latter half of 2025 compared to earlier in the year.

This tool is particularly relevant for enterprises and hyperscalers that are under pressure to provide compliant datasets for AI training. As AI models grow in complexity, the need for clean and ethically sourced data becomes paramount. The ability to automatically redact sensitive information helps organizations meet strict compliance standards while fostering innovation.

What Data Was Exposed

The primary concern addressed by Veritone's Redact tool is the potential exposure of PII during the data processing stages. This could include names, addresses, phone numbers, and other sensitive identifiers that, if mishandled, could lead to serious privacy violations. The automation of this process not only increases efficiency but also significantly reduces the risk of human error in manual redaction.
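To make the idea of automated redaction concrete, here is a deliberately naive, pattern-based sketch. This is not Veritone's implementation (which relies on AI models rather than fixed patterns); every pattern and placeholder label below is an assumption for illustration only:

```python
import re

# Illustrative patterns only -- real redaction tools use trained models;
# these regexes and labels are assumptions, not Veritone's method.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Call Jane at 555-867-5309 or email jane@example.com"))
# → Call Jane at [PHONE REDACTED] or email [EMAIL REDACTED]
```

Note that the person's name slips through: names and addresses require entity-recognition models, which is exactly why automated, model-driven redaction reduces the human error inherent in manual or pattern-only approaches.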

Veritone Redact has traditionally been used by public sector clients, including the Department of Justice and various law enforcement agencies. With enhancements like AI-powered voice masking and transcription capabilities in multiple languages, the tool is set to transform how sensitive data is handled across sectors.

What You Should Do

Organizations looking to leverage AI should consider implementing Veritone's solutions to ensure their datasets are compliant and ethically sourced. Here are some steps to take:

  • Evaluate your data: Identify any PII or sensitive information that may be present in your datasets.
  • Implement automated tools: Consider using Veritone Redact to streamline the PII removal process.
  • Stay informed: Keep abreast of industry compliance standards and legal requirements regarding data usage.
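The "evaluate your data" step above can be sketched as a simple audit pass that flags which records in a dataset contain likely PII before any AI training use. The CSV layout, column names, and detection patterns here are hypothetical illustrations, not part of any vendor's API:

```python
import csv
import io
import re

# Hypothetical audit pass: flag rows/fields containing likely PII.
# Patterns are simplistic on purpose; a real evaluation would use
# model-based detection.
PII_RE = re.compile(
    r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"   # phone-like
    r"|\b[\w.+-]+@[\w-]+\.[\w.]+\b"       # email-like
)

def audit_rows(rows):
    """Return (row_index, column_name) pairs whose value matches a PII pattern."""
    findings = []
    for i, row in enumerate(rows):
        for col, value in row.items():
            if value and PII_RE.search(value):
                findings.append((i, col))
    return findings

sample = io.StringIO("id,note\n1,call 555-123-4567\n2,all clear\n")
print(audit_rows(list(csv.DictReader(sample))))
# → [(0, 'note')]
```

An audit like this tells you where redaction effort is needed before handing a dataset to an automated tool or a model-training pipeline.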

By taking these proactive measures, organizations can not only protect their data but also foster a responsible AI ecosystem that prioritizes user privacy and ethical practices.

🔒 Pro insight: Veritone's approach sets a benchmark for ethical AI data handling, addressing compliance challenges in rapidly evolving AI landscapes.

Original article from

Help Net Security · Industry News


Related Pings

HIGH · AI & Security

AI Security - CISOs Struggle with Legacy Tools and Skills

A new report reveals that security leaders are struggling to secure AI systems effectively. With outdated tools and skills, organizations face significant risks. It's time to address these gaps in AI security.

The Hacker News

HIGH · AI & Security

AI Security - Jozu Agent Guard Launches for AI Agent Control

Jozu has launched Agent Guard, a new tool to secure AI agents from bypassing controls. This affects organizations using AI technologies without proper security measures. The tool aims to close governance gaps and protect corporate assets effectively.

Help Net Security

HIGH · AI & Security

AI Security - Proofpoint Introduces Intent-Based Detection

Proofpoint has launched AI Security to combat AI-related threats. This solution helps organizations secure AI interactions, addressing urgent security challenges. With increasing AI use, protecting data is critical.

Help Net Security

MEDIUM · AI & Security

AI Security - Enhancing Code Guidance with LLMs Explained

Mark Curphey explores how LLMs can enhance secure coding practices. He stresses the importance of clear documentation and authoritative sources for effective AI training. This conversation sheds light on the future of coding in an AI-driven world.

SC Media

HIGH · AI & Security

Google Cracks Down on Android Apps Abusing Accessibility

Google has tightened restrictions on Android apps using accessibility features. This change aims to curb malware exploitation and enhance user security significantly. Users should enable Advanced Protection Mode for better protection.

Malwarebytes Labs

HIGH · AI & Security

AI Security - Prompt Fuzzing Reveals LLMs' Fragility

Unit 42's latest research reveals that LLMs are vulnerable to prompt fuzzing attacks. This affects organizations using generative AI, risking safety and compliance. It's crucial to strengthen defenses against these evolving threats.

Palo Alto Unit 42