This is a courtesy translation. The English version is the legally binding document.
Last updated: February 28, 2026
Vocaid is committed to transparency about how artificial intelligence is used in our platform. This page provides a comprehensive overview of our AI systems, how they generate outputs, their known limitations, and your rights as a user.
We believe that every person who interacts with AI-powered hiring tools has the right to understand how those tools work and how they may affect decisions about their career.
Vocaid uses the following AI systems in its platform. Each system is described with its purpose, the data it processes, and the safeguards in place.
Conducts real-time voice-based interviews using natural language processing
Data Processed
Outputs
Known Limitations
Human Oversight
All AI-generated scores are presented as recommendations. B2B hiring managers retain full decision authority. Candidates may request human review.
Analyzes non-verbal communication signals during video interviews for coaching feedback
Data Processed
Outputs
Known Limitations
Human Oversight
Behavioral signals are supplementary coaching data only. They are never the sole basis for scoring. Users can opt out entirely and use audio-only mode.
Analyzes resume content against job descriptions to provide compatibility scoring
Data Processed
Outputs
Known Limitations
Human Oversight
ATS scores are advisory tools for resume improvement. They do not determine interview access or hiring outcomes.
Verifies that a real person is present during the interview session to prevent fraud
Data Processed
Outputs
Known Limitations
Human Oversight
Liveness verification is a binary pass/fail check. Failed verification can be retried or skipped. It does not affect interview scoring.
Vocaid's interview scoring evaluates candidates across multiple dimensions using AI analysis of their spoken responses. Scores reflect the AI's assessment of the competencies demonstrated in a given session, not a person's inherent abilities.
Scoring Dimensions
Scores range from 0 to 100 and represent relative assessments. They should be interpreted as coaching feedback, not absolute measurements, and may vary across sessions due to question variation, AI model updates, and differences in responses.
We are committed to ensuring our AI systems do not discriminate against any individual or group based on protected characteristics including race, gender, age, disability, national origin, or other legally protected categories.
Our Bias Prevention Measures
If you believe you have experienced bias in AI-generated scores or feedback, please report it to support@vocaid.ai. We investigate all bias reports and take corrective action where warranted.
Vocaid does not use emotion recognition, emotion inference, or affective computing in any of its AI systems. This prohibition is not merely a compliance measure; it is a core design principle.
The EU AI Act (Article 5(1)(f)), in force since February 2, 2025, explicitly prohibits the use of AI systems that infer emotions in workplaces and educational institutions, except for medical or safety purposes. Vocaid's behavioral analysis features generate only factual, observable outputs:
These outputs describe observable physical behaviors; they do not infer internal emotional states such as confidence, nervousness, stress, enthusiasm, or other affective traits. Our behavioral analysis is fundamentally different from emotion recognition because it measures what a person does, not what they feel.
Strict Prohibition
Vocaid's engineering standards prohibit model output labels that suggest emotion inference. All behavioral analysis outputs are reviewed to confirm that they describe factual observations only. Violations of the EU AI Act's emotion recognition prohibition carry penalties of up to EUR 35 million or 7% of global annual turnover.
Vocaid operates in compliance with AI regulatory frameworks across multiple jurisdictions. Below are the specific obligations under each applicable regulation and our corresponding compliance measures.
Vocaid is classified as a high-risk AI system under Annex III, category 4 (employment, workers' management and access to self-employment) of the EU AI Act (Regulation (EU) 2024/1689). As a provider of AI-powered hiring assessment tools, we follow the requirements of Chapter III, Section 2.
Conformity assessment and EU database registration (Article 60) are in progress, targeting completion by the August 2, 2026 deadline for high-risk obligations. The European Commission may extend this deadline to December 2, 2027 for some providers.
Emotion Recognition
The prohibition on emotion recognition in workplace AI systems (Article 5(1)(f)) has been in force since February 2, 2025. Vocaid has never used emotion recognition and confirms full compliance with this prohibition.
The Colorado AI Act, effective February 1, 2026, imposes obligations on developers and deployers of high-risk AI systems. As a developer of AI hiring assessment tools, Vocaid recognizes the following obligations:
Vocaid's deployers (employers) have independent obligations under the Colorado AI Act, including conducting impact assessments and providing consumer notices. We support deployers in meeting these obligations through platform documentation and transparency tools.
The Texas Responsible AI Governance Act, effective January 1, 2026, requires companies deploying AI systems in Texas to maintain an AI governance framework. Vocaid complies with the following requirements:
Vocaid's AI interview and scoring platform constitutes an automated employment decision tool (AEDT) under NYC Local Law 144. We comply with the following requirements:
Under Article 20 of Brazil's Lei Geral de Proteção de Dados (LGPD), data subjects have the right to request human review of decisions made solely on the basis of automated processing that affects their interests, including profiling. Vocaid guarantees the following:
The 2025 amendments to Mexico's Federal Law on the Protection of Personal Data Held by Private Parties introduce new obligations for automated decision-making. Vocaid complies with the following:
Colombia's Superintendencia de Industria y Comercio (SIC) has issued Circular 001/2025 on biometric data processing requirements. Regarding Vocaid's operations in Colombia:
Vocaid maintains a comprehensive AI governance framework that addresses the responsible development, deployment, and monitoring of AI systems in every jurisdiction where we operate.
Our Governance Practices
Our AI governance framework is designed to evolve with the regulatory landscape. We regularly review and update our practices to align with new requirements under the EU AI Act, the Colorado AI Act, the Texas Responsible AI Governance Act, and other emerging AI regulations.
Vocaid recognizes the following rights for all individuals who interact with our AI systems:
Right to Know
You have the right to know that AI is being used in your interview process and how it affects your assessment.
Right to Consent
You have the right to provide informed consent before biometric data (voice patterns, facial geometry) is processed.
Right to Opt Out
You have the right to opt out of video recording and behavioral analysis and use audio-only mode without penalty.
Right to Human Review
You have the right to request human review of any AI-generated score or assessment.
Right to Appeal
You have the right to contest AI-generated assessments and provide additional context.
Right to Explanation
You have the right to receive a meaningful explanation of how your scores were generated.
Right to Data Access
You have the right to access all data collected about you and all AI-generated outputs.
Right to Deletion
You have the right to request permanent deletion of your interview data, recordings, and AI outputs.
Right to Non-Discrimination
You have the right to fair treatment regardless of your race, gender, age, disability, accent, or other personal characteristics.
Right to Accommodations
You have the right to request reasonable accommodations for disabilities that may affect AI assessment accuracy.
In accordance with the European Union AI Act (Regulation 2024/1689), Vocaid provides the following technical documentation for our AI systems classified under the Act. Our systems are designed to comply with transparency obligations for AI systems that interact with natural persons.
System Information & Intended Purpose
Training Data & Model Information
Risk Management & Mitigation
Vocaid implements the following measures to mitigate risks associated with AI-assisted assessment:
For the complete technical documentation file or to submit questions regarding EU AI Act compliance, contact support@vocaid.ai. We are committed to full compliance with the Act's requirements as they enter into force.
New York City Local Law 144 of 2021 requires employers and employment agencies that use automated employment decision tools (AEDTs) to conduct annual independent bias audits and to provide notice to candidates. Vocaid is committed to compliance with LL144 requirements.
Applicability & Scope
Independent Bias Audit
Vocaid will commission an independent bias audit before deploying our AI scoring for B2B hiring decisions in NYC. The audit will evaluate:
Candidate Notice
In compliance with LL144, Vocaid provides the following notice to candidates evaluated for NYC employment opportunities: An automated employment decision tool (AEDT) will be used in connection with the assessment of your job application. The AEDT evaluates your interview responses for communication clarity, technical knowledge, problem-solving ability, and relevance. You may request an alternative selection process or an accommodation by contacting the employer directly or by emailing support@vocaid.ai.
The bias audit summary will be published on this page once completed. Vocaid retains bias audit results for at least four years as required by law. For questions about LL144 compliance, contact support@vocaid.ai.
Vocaid uses the following third-party AI services. We maintain Data Processing Agreements with each provider to ensure your data is handled responsibly.
For questions about our AI practices, to exercise your rights, or to report a concern:
Email: support@vocaid.ai
We take all AI-related inquiries seriously and aim to respond within 15 business days.