Metadata MCP Overview and FAQ
Overview
Metadata employs AI and ML services to deliver impactful features, powerful insights, and an enhanced user experience. Metadata's AI features are carefully orchestrated to give our customers control over their data and over the level of AI engagement.
AI Implementation and Features
What AI systems and technologies does Metadata currently implement?
Metadata has implemented a Model Context Protocol (MCP) server (MetadataONE) that bridges Large Language Models like ChatGPT and Claude with the Metadata platform. This enables users to access platform functionalities through conversational AI interfaces.
The MCP server allows users to do the following directly from an LLM chat interface, without switching between platforms:
- Analyze CRM, marketing automation, and advertising data across multiple campaigns and channels
- Build audiences
- Deploy campaigns
- Create assets
- Optimize budgets
The system includes enterprise-grade security controls and lets users choose which features and workflows to enable for the conversational AI client they integrate with Metadata's MCP server.
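For illustration, the minimal sketch below shows how an MCP server can expose platform actions as tools that an LLM client calls. The tool names, parameters, and return values are hypothetical, not Metadata's actual API; the example assumes the open-source MCP Python SDK (FastMCP).

```python
# Hypothetical sketch of exposing platform actions as MCP tools.
# Assumes the open-source MCP Python SDK (pip install mcp); tool names and
# fields are illustrative, not Metadata's actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-marketing-mcp")

@mcp.tool()
def analyze_campaign_performance(campaign_id: str, metric: str = "cpl") -> dict:
    """Return aggregate performance for a campaign (illustrative stub)."""
    # A real server would query the platform's reporting API here.
    return {"campaign_id": campaign_id, "metric": metric, "value": 42.0}

@mcp.tool()
def build_audience(industry: str, company_size: str, job_function: str) -> dict:
    """Create an audience from firmographic criteria (illustrative stub)."""
    return {"audience_id": "aud_123", "criteria": [industry, company_size, job_function]}

if __name__ == "__main__":
    # An LLM client (e.g., Claude or ChatGPT) connects to this server over the
    # Model Context Protocol, discovers the tools above, and invokes them.
    mcp.run()
```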
Autonomous AI Agent Ecosystem
Metadata utilizes multiple specialized AI agents to automate the execution of paid advertising campaigns across the entire marketing workflow:
Metadata MCP
The AI assistant that orchestrates core Metadata workflows using the tools made available by Metadata's MCP server.
Bid Agent
Operates autonomously to optimize bidding strategies across campaigns. Rather than optimizing solely for traditional metrics like CPL, the Bid Agent continuously adjusts bids based on revenue optimization, connecting advertising performance directly to closed won deals and pipeline value.
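As a rough, generic illustration of revenue-aware bidding (not the Bid Agent's actual model), a bid can be nudged toward a target return on ad spend derived from pipeline value rather than cost per lead alone:

```python
# Illustrative only: a simplified revenue-aware bid adjustment.
# The real Bid Agent's model and inputs are not described here.
def adjust_bid(current_bid: float, spend: float, pipeline_value: float,
               target_roas: float = 3.0, max_step: float = 0.15) -> float:
    """Nudge the bid up when return on ad spend exceeds target, down when it lags."""
    if spend <= 0:
        return current_bid
    roas = pipeline_value / spend                 # pipeline value per dollar spent
    error = (roas - target_roas) / target_roas    # relative distance from target
    step = max(-max_step, min(max_step, error))   # cap the per-cycle change
    return round(current_bid * (1 + step), 2)

# Example: a campaign returning $4 of pipeline per $1 of spend gets a higher bid.
print(adjust_bid(current_bid=10.0, spend=1_000, pipeline_value=4_000))  # 11.5
```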
Creative Agent
Available through the Metadata MCP server, this agent automates the generation and optimization of campaign assets by analyzing brand guidelines, ICP personas, and performance data to produce ad copy, headlines, creative assets, and offers. The agent can generate multiple variations for A/B testing and iteratively refine creative based on engagement and conversion signals.
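One common way to iterate on creative, shown below purely as a generic sketch (epsilon-greedy allocation) and not as the Creative Agent's actual algorithm, is to keep most impressions on the current best-performing variant while continuing to test alternatives:

```python
# Generic epsilon-greedy allocation across creative variants; illustrative only,
# not the Creative Agent's actual optimization logic.
import random

def pick_variant(stats: dict[str, dict], epsilon: float = 0.1) -> str:
    """stats maps variant -> {"impressions": int, "conversions": int}."""
    if random.random() < epsilon:                 # explore occasionally
        return random.choice(list(stats))
    def rate(v):                                  # otherwise exploit the leader
        s = stats[v]
        return s["conversions"] / max(s["impressions"], 1)
    return max(stats, key=rate)

stats = {
    "headline_a": {"impressions": 1200, "conversions": 36},
    "headline_b": {"impressions": 1150, "conversions": 52},
    "headline_c": {"impressions": 300,  "conversions": 6},
}
print(pick_variant(stats))  # usually "headline_b"
```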
Analyst Agent
Available through the Metadata MCP server, this agent provides AI-powered performance insights and analysis across campaigns and channels. It surfaces actionable recommendations, identifies optimization opportunities, and helps marketers understand what's driving results without manual data analysis across multiple dashboards.
These agents collectively handle targeting, bidding, creative testing, budget optimization, and reporting across all major advertising channels, including LinkedIn, Google, Facebook, Instagram, Reddit, and X. The agents operate continuously, making thousands of micro-optimizations that would be impossible to execute manually.
What types of data are processed through your AI systems?
Metadata's AI systems process B2B marketing and advertising data, including:
- Campaign performance metrics (impressions, clicks, conversions, spend, leads, MQLs)
- Pipeline and revenue data (opportunities, closed won deals, ROI)
- Audience targeting configurations (firmographic, technographic, job level, geographic criteria)
- Creative assets (ad copy, headlines, images, CTAs)
- Campaign settings (budget groups, optimization parameters, channel configurations)
- Offer data (lead generation forms, landing pages)
Through the Metadata MCP connector, Large Language Models can query and analyze this data, build audiences, create campaigns, and generate creative assets via conversational interfaces. The system processes structured marketing data from multiple advertising channels (LinkedIn, Facebook, Instagram, Google Ads, Reddit, X) and connects to CRM systems for revenue attribution.
Important: No personally identifiable information, such as individual email addresses or phone numbers, is exposed to external LLM systems through the MCP interface.
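As a hedged sketch of this kind of control (field names are hypothetical and this is not Metadata's actual filtering code), a strict allowlist keeps only aggregate, non-identifiable fields when preparing a record for an LLM:

```python
# Illustrative sketch: allowlist non-identifiable fields before sending data to an LLM.
# Field names are hypothetical; this is not Metadata's actual filtering logic.
ALLOWED_FIELDS = {
    "campaign_id", "channel", "impressions", "clicks",
    "conversions", "spend", "mqls", "pipeline_value",
}

def to_llm_payload(record: dict) -> dict:
    """Keep only allowlisted, aggregate fields; drop anything that could identify a person."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "campaign_id": "cmp_42", "channel": "linkedin", "impressions": 12000,
    "clicks": 340, "spend": 950.0,
    "lead_email": "jane@example.com",   # PII: never forwarded
    "lead_phone": "+1-555-0100",        # PII: never forwarded
}
print(to_llm_payload(record))
# {'campaign_id': 'cmp_42', 'channel': 'linkedin', 'impressions': 12000, 'clicks': 340, 'spend': 950.0}
```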
Data Protection & Privacy Controls
Controlled Data Exposure to LLMs
Choosing to use Metadata AI involves controlled exposure of non-identifiable data to LLMs. This is by design and in compliance with Metadata's data protection and privacy policies.
- Account admins and super users can enable or disable the configurable AI components from account settings in the Metadata platform
- Metadata only engages LLMs hosted by service providers through a commercial contract or subscription
- Metadata does NOT host LLMs internally at this time
- Metadata never shares Personally Identifiable Information (PII) with LLMs
- Metadata controls data sharing with LLMs on a need-to-know basis and prevents service providers from retaining the shared data or using it for training purposes
External Client Integration
It is the customer's choice to integrate Metadata's MCP server with external MCP clients (like Claude.ai, ChatGPT, etc.).
Important considerations:
- Should a customer choose to integrate, they will be informed of, and prompted to accept responsibility for, client-side data protection and the client's control over their Metadata account (and, by extension, over the ad accounts integrated with Metadata)
- Only account admins will be able to connect external clients
- The client's data protection configuration will determine how data is consumed, retained, processed, and used by the client
- Metadata will NOT have any control over configurations and settings of the external clients
- Disabling AI features in Metadata Platform will NOT automatically disconnect external clients
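A minimal sketch of the admin-only connection rule above, with hypothetical role names and a placeholder credential rather than Metadata's actual connection flow:

```python
# Illustrative sketch: only account admins may authorize an external MCP client.
# Role names, exceptions, and the token helper are hypothetical.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    role: str  # e.g., "admin", "member", "viewer"

class NotAuthorized(Exception):
    pass

def connect_external_client(user: User, client_name: str) -> str:
    """Issue a connection credential for an external MCP client, admins only."""
    if user.role != "admin":
        raise NotAuthorized("Only account admins can connect external MCP clients.")
    # In a real flow the customer would also be shown, and must accept, the
    # responsibility notice for client-side data protection before this point.
    return f"mcp-connect-token-for-{client_name}"  # placeholder credential

token = connect_external_client(User("u_1", "admin"), "claude.ai")
print(token)
```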
Is customer data used to train AI models?
No. Customer data from LinkedIn and other media sources is not used to train or fine-tune external AI models such as those from OpenAI or Anthropic. LinkedIn and media performance data is used exclusively for training internal bid optimization models, not for LLM training. Technical options exist to exclude certain campaigns from bid optimization model training.
Security and Governance
Do you have formal AI requirements in place?
Yes. Metadata maintains acceptable use requirements for Generative AI, LLMs, and GPT that establish comprehensive guidelines for responsible, ethical, and secure use of AI technologies.
Requirements emphasize five core principles:
- Responsible use
- Bias mitigation
- Accuracy and transparency
- Respect for privacy
- Compliance with laws and regulations
Requirements cover both general organizational use and specific guidelines for secure software development practices.
What measures are in place to prevent AI bias and ensure fairness?
Metadata has established bias mitigation as one of five core principles within its acceptable use requirements for Generative AI, Large Language Models, and GPT technologies.
Performance-Based Optimization
Metadata's AI-driven campaign optimization systems are designed around objective, performance-based metrics including Cost per Lead, revenue generation, and pipeline impact, rather than subjective criteria. This methodological focus significantly reduces the potential for discriminatory targeting practices.
Business-Focused Targeting Criteria
Metadata's audience targeting methodology relies on business-relevant firmographic criteria, including company size, industry classification, job function, and professional seniority level, rather than personal demographic characteristics. This approach aligns with both B2B marketing best practices and applicable regulatory requirements.
Transparency and Accountability Mechanisms
The MCP connector incorporates enterprise-grade controls, including permission-based access protocols and comprehensive audit trails, which enable meaningful oversight of AI decision-making processes. Metadata conducts regular monitoring of campaign performance across different audience segments to identify potential disparities in ad delivery or engagement patterns.
How do you protect sensitive data in AI processing?
Metadata implements comprehensive data protection measures:
- Data classification requirements with Confidential_PI classification for B2B personal information
- No PII is exposed during AI interactions
- Transient data handling for LLM interactions
- AWS hosting with multi-Availability Zone backup
- Data encryption using AES-256 with 256-bit keys
- Structured data storage in secure MySQL databases with geographic data residency controls
What encryption standards are applied to AI-related data?
Metadata applies AES-256 encryption (256-bit keys) across all systems handling AI-related data. The encryption requirements mandate encryption for data at rest and in transit, with encryption key material protected through privileged account access controls. PII is hashed using SHA-256 and encrypted for secure transmission and storage.
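For concreteness, the sketch below shows AES-256 (GCM mode) encryption and SHA-256 hashing using the widely used cryptography library and hashlib. It illustrates the standards named above only; key management is deliberately simplified, whereas in practice key material sits behind privileged account controls.

```python
# Illustrative only: AES-256 (GCM mode) encryption and SHA-256 hashing.
# Requires the third-party cryptography package (pip install cryptography).
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key -> AES-256
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique nonce per message

plaintext = b'{"campaign_id": "cmp_42", "spend": 950.0}'
ciphertext = aesgcm.encrypt(nonce, plaintext, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext

# SHA-256 digest of an identifier (hashing is one-way, unlike encryption).
digest = hashlib.sha256(b"jane@example.com").hexdigest()
print(len(key) * 8, digest[:16])            # key size in bits, truncated digest
```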
Where is AI-processed data stored geographically?
AI-processed data is stored in AWS us-west-2 region as the primary location, with backup storage in AWS us-east-1 region. All data storage and processing occurs within the United States. The original data format is maintained in structured MySQL databases, with geographic data residency controls ensuring US-based processing.
How long is AI-processed data retained?
AI-generated data follows Metadata's standard data retention requirements. Only transient data is exposed to LLMs, with generated data becoming part of Metadata's data layer. Contextual data for conversation restoration is retained beyond AI sessions. The external sub-processor Wordware maintains logs, including prompts, extracted data, and LLM results, for 2 years.
What access controls are in place for AI systems?
Metadata implements role-based access controls under its access control requirements, which mandate unique user identification and strong authentication. The AI model cannot access data beyond the calling user's authorization. Access to AI systems is restricted to authorized users with appropriate business justification and approval processes.
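A minimal sketch, with hypothetical account and permission names, of the principle that the AI model cannot access data beyond the calling user's authorization: every tool invocation is resolved against the caller's own scope before any data is fetched.

```python
# Illustrative sketch: scope every AI tool call to the authenticated user's permissions.
# Account IDs and permission names are hypothetical.
PERMISSIONS = {
    "u_admin":  {"accounts": {"acct_1", "acct_2"}, "can_deploy": True},
    "u_viewer": {"accounts": {"acct_1"},           "can_deploy": False},
}

class AccessDenied(Exception):
    pass

def fetch_campaigns_for_llm(user_id: str, account_id: str) -> list[dict]:
    """Return campaign data only if the calling user is authorized for the account."""
    perms = PERMISSIONS.get(user_id)
    if not perms or account_id not in perms["accounts"]:
        raise AccessDenied(f"{user_id} is not authorized for {account_id}")
    # A real implementation would query the data layer with the same scope applied.
    return [{"campaign_id": "cmp_42", "account_id": account_id, "spend": 950.0}]

print(fetch_campaigns_for_llm("u_viewer", "acct_1"))   # allowed
# fetch_campaigns_for_llm("u_viewer", "acct_2")        # would raise AccessDenied
```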
Compliance and Risk Management
What compliance frameworks does your AI implementation follow?
Metadata's AI systems comply with SOC 2 and ISO 27001:2022 standards. The organization maintains a requirements framework that includes information security requirements, data protection requirements, and access control policies. AI implementations comply with GDPR, CPRA, and Swiss data protection requirements, incorporating appropriate privacy safeguards and data protection measures.
How do you conduct AI risk assessments?
Metadata conducts annual risk assessments using a structured four-part process:
- Asset identification
- Threat and vulnerability assessment
- Quantitative risk scoring
- Mandatory treatment requirements
AI systems are evaluated for confidentiality, integrity, and availability risks with documented Risk Treatment Plans for identified risks.
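To make the quantitative scoring step concrete, a common generic approach (shown here as an example only, not Metadata's scoring methodology) multiplies likelihood by impact and mandates treatment above a threshold:

```python
# Generic illustration of quantitative risk scoring (likelihood x impact).
# Scales and thresholds are examples, not Metadata's risk methodology.
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 scale; score ranges from 1 to 25."""
    return likelihood * impact

def requires_treatment(score: int, threshold: int = 12) -> bool:
    """Risks at or above the threshold get a documented Risk Treatment Plan."""
    return score >= threshold

score = risk_score(likelihood=4, impact=4)   # e.g., a hypothetical data-exposure scenario
print(score, requires_treatment(score))      # 16 True
```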
What incident response procedures exist for AI-related security events?
Metadata maintains an incident management procedure establishing a three-phase process for AI-related incidents: Identification, Assessment, and Response. The Security Response Team includes the CISO and appropriate personnel for immediate response and impact assessment. AI security incidents are reported to secops@metadata.io with comprehensive documentation and corrective action procedures.
Are there audit trails for AI system activities?
Yes. Metadata maintains comprehensive event logging for all systems handling confidential information, including AI systems. Logging captures data operations, authentication events, access control changes, and security events. All AI interactions and processing activities are logged with sufficient information to determine what activity was performed, who performed it, and when it occurred.
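A hedged sketch of what a single audit record could capture, with illustrative field names rather than Metadata's actual logging schema: each AI interaction is recorded with the what, who, and when described above.

```python
# Illustrative audit-trail entry for an AI interaction: what, who, and when.
# Field names are examples only, not Metadata's logging schema.
import json
from datetime import datetime, timezone

def audit_event(actor: str, action: str, resource: str, outcome: str) -> str:
    """Serialize one audit record as a JSON line suitable for an event log."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),   # when it occurred
        "actor": actor,                                        # who performed it
        "action": action,                                      # what was performed
        "resource": resource,
        "outcome": outcome,
    })

print(audit_event("u_admin", "mcp.tool.analyze_campaign_performance",
                  "campaign:cmp_42", "success"))
```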
What third-party AI vendors do you work with and how are they managed?
Metadata works with OpenAI, Anthropic, Google, and Wordware as AI sub-processors. The vendor management requirements establish comprehensive obligations for third-party service providers, including risk assessments, mandatory contract security requirements, and sub-processor management. Vendor relationships are managed through the Panorays third-party risk management platform with continuous monitoring and compliance verification.
Technical Implementation
How do you ensure AI system availability and business continuity?
Metadata maintains AWS cloud infrastructure with multi-availability zone deployment for high availability. The operational resilience requirements and recovery procedures establish comprehensive procedures for maintaining AI system operations during disruptions. The system includes Recovery Time Objectives of 8-72 hours and Recovery Point Objectives of 2 hours to 7 days, depending on data protection type.
What vulnerability management practices apply to AI systems?
Metadata implements comprehensive vulnerability management requiring annual system scanning with structured remediation. AI systems undergo regular security assessments with service level agreements for vulnerability remediation ranging from 10 days for critical vulnerabilities to 90 days for low-severity issues. Third-party penetration testing is conducted annually by independent firms.
How are AI systems integrated with existing security controls?
AI systems integrate with Metadata's comprehensive security framework, including the Information Security Management System and Privacy Information Management System. Integration includes identity and access controls through Auth0 (Okta), monitoring through LogZ, vulnerability management through multiple scanning tools, and compliance through the Drata platform for requirements management and evidence collection.
What monitoring and alerting capabilities exist for AI systems?
Metadata implements continuous control monitoring through the LogZ platform for security analytics and system monitoring. AI systems are subject to comprehensive event logging, real-time monitoring for security violations and compliance breaches, and automated alerting for risk-triggered scenarios. The monitoring framework supports enterprise-level analysis and reporting with administrator accountability controls.
How do you handle AI system updates and changes?
AI system updates follow the development lifecycle requirements requiring security controls at every development stage, including OWASP compliance, secure coding practices, and separation of duties between development, testing, and production environments. Changes require formal approval through change management procedures with comprehensive testing, security validation, and documented rollback strategies before production deployment.