Harnessing intelligence, guided by ethics
SUMMARY
The proliferation of Artificial Intelligence (AI) has transformed organizations, accelerating innovation and operational efficiency. At Boomitra, the responsible and ethical application of AI is core to our vision: accelerating global carbon sequestration and enabling sustainable agricultural transformation. This policy articulates Boomitra’s standards, principles, and governance structures to ensure that AI is used safely, reliably, transparently, and equitably, in a manner that upholds our commitment to data integrity and aligns with human rights and environmental safeguards.
PURPOSE
This policy ensures that every AI system or AI-enabled process developed, procured, or used by Boomitra meets minimum standards of safety, fairness, transparency, privacy, and accountability, and that AI use amplifies the positive social, environmental, and economic impacts for our farmer-partners and stakeholders.
SCOPE
This policy applies to:
Exemptions require written approval from the COO/CDO and must document compensating controls.
GUIDING PRINCIPLES
Boomitra commits to the following principles for all AI activities. These align with enterprise best practices and are tailored to our context of smallholder agriculture and carbon markets.
INTEGRATION WITH BOOMITRA’S POLICY ECOSYSTEM
This Policy will be read in conjunction with Boomitra’s Whistleblowers Policy, Employee Code of Conduct, Partner Ethics Policy and Code of Conduct, Confidential Information and Invention Assignment Agreement, Sexual Harassment Policy, Third Party Harassment Policy, Grievance Policy for Partners and Enrolled Farmers and Landowners, Employee Grievance Policy, Data Protection and Retention Policy, Quality Assurance Policy, Disaster Recovery Policy, Health, Safety and Environment Policy, Environmental, Social and Governance Policy, Diversity, Equity and Inclusion Policy, Modern Slavery Policy, and any other related policies that may be published in future.
GOVERNANCE STRUCTURE AND ROLES
Executive Oversight
Board-level oversight: The Board will receive an annual Responsible AI report summarizing high-risk AI use-cases, incidents, audit outcomes, and mitigation plans.
Executive Committee: The Executive Committee (EXCOM) will review and approve all Medium- and High-risk models. High-risk models will also require Board notification.
AI Ethics and Oversight Council: This council will be led by the Chief Data & AI Officer (CDO). The COO will hold the role of CDO, serving as policy owner, approving classification of AI risk, maintaining the AI catalogue, and owning escalation to the Board. The council will consist of representatives from HR, Legal, Product, Field Operations, Sales, Business Development, Technology, and an external AI ethics advisor (whenever necessary). They shall be responsible for:
a) Vetting new AI initiatives before launch. b) Overseeing regular bias and impact audits. c) End-to-end lifecycle management from design, testing, deployment, monitoring, and incident response to documentation. d) Approving AI applications that have the potential for significant impact on individuals or the environment. e) Ensuring compliance with current and emerging regulations.
AI RISK CLASSIFICATION
Every AI initiative must be classified before resource allocation using the Boomitra AI Risk Matrix. Classification criteria include potential for physical harm, financial impact on farmers, legal/regulatory exposure, reputational risk, and reversibility.
High risk requires extensive validation, third-party auditing where appropriate (e.g., alignment with Verra/Social Carbon MRV rules), and higher insurance/contractual safeguards with partners.
DECISION AUTHORITY MATRIX
This layered approach balances speed and safety. A documented risk/benefit analysis and ethical impact assessment must be completed for all Medium- and High-risk models as described below.
ACCEPTABLE AND UNACCEPTABLE USES
Acceptable Uses
Prohibited Uses
DATA GOVERNANCE AND FARMER CONSENT
Collect only the data necessary for a stated, documented purpose. Data collected for MRV, agronomy, and project administration must be scoped and recorded in a Data Use Register.
a) Informed Consent: Farmers must receive clear, local-language explanations of what data is collected, why, how it will be used, who will access it, and the implications (including on carbon rights and payments). Consent must be recorded and versioned.
b) Opt-out and Redress: Farmers can opt out of non-essential data collection; we must provide mechanisms to correct data, request deletion (subject to contractual and regulatory constraints), and appeal model-based decisions that affect payments.
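The consent requirements above can be sketched as a minimal, append-only consent record. This is an illustrative data model only; the `ConsentRecord` class, its field names, and the `latest_consent` helper are assumptions for this sketch, not part of any Boomitra system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """One immutable, versioned consent entry for a farmer."""
    farmer_id: str
    purpose: str    # stated, documented purpose (e.g., a Data Use Register key)
    language: str   # local language in which the explanation was given
    version: int    # incremented on every change of terms or of the decision
    granted: bool
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def latest_consent(records, farmer_id, purpose):
    """Return the highest-version record for this farmer and purpose."""
    matches = [r for r in records
               if r.farmer_id == farmer_id and r.purpose == purpose]
    return max(matches, key=lambda r: r.version) if matches else None

# An opt-out is recorded as a new, higher-version record; the original
# grant is never mutated or deleted, preserving the audit trail.
history = [
    ConsentRecord("F-001", "mrv", "sw", 1, True),
    ConsentRecord("F-001", "mrv", "sw", 2, False),  # farmer opted out
]
assert latest_consent(history, "F-001", "mrv").granted is False
```

Recording opt-outs as new versions, rather than edits, is what makes consent "recorded and versioned" in the sense required above.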
a) Maintain provenance metadata for all datasets (source, collection date, collector, version). b) Label training data to note any systematic biases and apply corrective sampling or weighting strategies.
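The provenance fields named above (source, collection date, collector, version) plus a slot for documented biases could be captured in a structure like the following sketch; the `DatasetProvenance` class and its example values are hypothetical.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DatasetProvenance:
    """Provenance metadata attached to every dataset version."""
    source: str
    collection_date: str
    collector: str
    version: str
    known_biases: tuple = ()  # documented systematic biases, if any

prov = DatasetProvenance(
    source="field-survey",
    collection_date="2024-06-01",
    collector="regional-team-3",
    version="v1.2",
    known_biases=("over-represents irrigated farms",),
)
# Serializable for storage alongside the dataset in a registry.
record = asdict(prov)
assert record["version"] == "v1.2"
```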
a) Enforce Role-Based Access Controls (RBAC) for all datasets. b) Sensitive location data and identities must be pseudonymized when used for model training or shared externally, unless explicit consent permits re-identification for audit/claims. c) Third-parties receiving data must sign data processing agreements that enforce Boomitra’s privacy, security, and benefit-sharing obligations.
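One common way to implement the pseudonymization required above is a keyed hash: the same identifier always maps to the same token (so datasets can still be joined), but the token cannot be reversed without the secret key. This is a minimal sketch, assuming HMAC-SHA256; the function name and key handling are illustrative only, and a real key must live in a secrets manager.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Deterministic keyed pseudonym: same input + key -> same token;
    irreversible without the key."""
    digest = hmac.new(secret_key, identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # truncated token for readability

key = b"example-key-kept-in-a-secrets-manager"  # illustrative only
t1 = pseudonymize("farmer-12345", key)
t2 = pseudonymize("farmer-12345", key)
assert t1 == t2                                 # stable join key across datasets
assert t1 != pseudonymize("farmer-67890", key)  # distinct farmers stay distinct
```

Re-identification for audit or claims, where consent permits it, then requires access to the key rather than to the shared datasets themselves.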
a) Define retention periods per data category in the Data Retention Schedule. b) Older datasets required for carbon permanence or legal audit (e.g., MRV records) may be retained longer with enhanced protections and documented justification.
This farmer-centric consent and data governance approach is fundamental given our model of farmer-owned carbon rights and revenue-sharing commitments.
MODEL DEVELOPMENT LIFECYCLE (MDLC) REQUIREMENTS
All Boomitra models must follow the MDLC checklist below. Documentation for each step must be stored in the Model Registry.
a) Document the purpose, intended users, decision boundary, and expected benefits. b) Conduct a Socio-Environmental Impact Assessment (SEIA) for medium/high-risk models covering livelihood, gender, equity, land-use, and long-term carbon permanence considerations.
a) Ensure data quality checks, labeling guidelines, and offensive content filtering for imagery/metadata. b) Use techniques to mitigate sampling bias (stratified sampling across agroecological zones, gender, farm-size). c) Log dataset versions and any synthetic data creation.
a) Prefer interpretable models for high-impact decisions where possible; where black-box models are used, pair them with explainability techniques and rigorous human-in-the-loop controls. b) Maintain validation datasets that are geographically and temporally representative; withhold a test set for independent evaluation. c) Document performance metrics and threshold criteria for deployment, including uncertainty bounds in SOC estimates.
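The requirement to report uncertainty bounds alongside SOC estimates can be illustrated with a percentile bootstrap confidence interval over field samples. The sample values, 95% level, and function name below are illustrative assumptions, not Boomitra's actual MRV method.

```python
import random
import statistics

def bootstrap_ci(samples, n_resamples=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean of field-level SOC samples."""
    rng = random.Random(seed)  # fixed seed for a reproducible interval
    means = sorted(
        statistics.mean(rng.choices(samples, k=len(samples)))
        for _ in range(n_resamples))
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

soc = [21.3, 19.8, 22.5, 20.1, 23.0, 18.9, 21.7, 20.6]  # t C/ha, illustrative
low, high = bootstrap_ci(soc)
assert low < statistics.mean(soc) < high  # point estimate lies inside the bounds
```

Reporting the interval `(low, high)` rather than only the point estimate is what makes the deployment threshold criteria auditable.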
a) Medium-risk Models: EXCOM review and technical and socio-environmental sign-off. b) High-risk Models: Full EXCOM review, external audit (if required), legal sign-off on data/contractual implications, and field pilot with independent sampling.
a) All live decision systems must include escalation pathways to human experts. b) For agronomic recommendations: provide explainable advice with provenance (data and model version) and an associated confidence score. Farmers must always have the final choice; Boomitra should avoid forced automation of farm management actions. c) For MRV and credit quantification: retain human auditors and sampling validation; automated estimates may be used for operational efficiency, but final credit issuance must remain auditable and reversible where errors are detected.
a) Implement continuous monitoring for model drift, performance degradation, bias indicators, and fairness measures. b) Establish automated alerts for metric breaches and require the Model Owner to initiate remediation (retrain, rollback, or restrict scope). c) Field validation cadence: quarterly checks for pilot regions; annual for mature projects; ad-hoc if anomalies are detected.
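One common drift indicator suitable for the automated alerts described above is the Population Stability Index (PSI) between a feature's distribution at deployment and its distribution in current field data. This is a sketch under assumed bin proportions; the 0.2 alert threshold is a widely used convention, not a Boomitra-specific value.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (proportions each summing to ~1). PSI > 0.2 commonly signals drift."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # feature distribution at deployment
current = [0.10, 0.20, 0.30, 0.40]   # distribution observed in the field

score = psi(baseline, current)
if score > 0.2:  # illustrative breach -> Model Owner initiates remediation
    alert = f"DRIFT: PSI={score:.3f} exceeds 0.2; retrain, rollback, or restrict"
```

A metric breach like this would trigger the remediation duty assigned to the Model Owner above.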
a) Maintain decommissioning plans that include data archival, model governance handover, and notifications to impacted farmers and partners.
These MDLC stages follow best-practice assurance processes to ensure safe, auditable AI deployments.
THIRD-PARTY MODELS, LLMs, AND PROCUREMENT
a) Any procurement of third-party models, APIs, or LLM services must be routed through the CDO. b) Perform a vendor risk assessment covering data usage, model training practices, licensing/IP terms, capability to delete shared data, security posture, and compliance with local law.
a) No sensitive farmer-identifiable data, precise geolocation coordinates tied to individual owners, contract terms, or buyer pricing details shall be shared with public LLMs or consumer-grade AI tools (e.g., public chat interfaces) without explicit legal sign-off and documented consent. b) When public LLMs are used for prototyping or productivity, require pseudonymized, minimal examples and never reveal protected attributes.
a) Confirm training data provenance and licensing for any third-party model used in production. b) Avoid models where training data potentially infringes copyrighted material in ways that could create legal liability for Boomitra.
a) Ensure SLAs cover model availability, incident response, data breach notification, and rights to retrieve/delete data after contract termination.
This procurement discipline protects farmer data, corporate IP, and Boomitra’s legal exposure.
PRIVACY, SECURITY, AND IP
a) Embed privacy controls early: pseudonymization, data minimization, local-language consent flows, and Privacy Impact Assessments (PIAs) for applicable projects. b) AI development must comply with all applicable data privacy regulations, including the General Data Protection Regulation (GDPR, EU), India’s Digital Personal Data Protection Act (DPDP Act), and sector-specific guidelines. c) Data Residency and Jurisdictional Security: Boomitra strictly adheres to local statutes governing the domestic storage and security of Personally Identifiable Information (PII) within its country of origin. Our data architecture is engineered to ensure that sensitive citizen data remains within national borders whenever mandated by regional law or sector-specific guidelines. d) All personal and sensitive data used for AI training or inference must be anonymized or pseudonymized unless explicit, auditable consent has been obtained.
a) Role-Based Access Controls (RBAC), encrypted data at rest and in transit, Multi-Factor Authentication (MFA) for sensitive systems, and regular penetration testing. b) Monitor model inference pipelines for data exfiltration or prompt injection attacks when integrating with LLMs. c) Employees must use only approved, secure data platforms for model development and must not transfer organizational, customer, or partner data to third-party AI tools without CDO approval.
a) Respect farmer-owned carbon rights: Boomitra will not unilaterally commercialize farm-level data outside agreed benefit-sharing arrangements. b) Ownership of models developed by Boomitra belongs to Boomitra, subject to contractual arrangements with implementation partners where co-development applies.
a) Align incident response with the Corporate Incident Response Plan. Notify affected stakeholders (farmers, buyers, partners) in a timely, transparent manner; legal counsel to advise on regulatory notifications. b) Incident response protocols will be triggered for any suspected data breach involving AI systems, requiring immediate investigation and reporting to the CDO and relevant authorities.
Security, privacy, and IP protections are non-negotiable given our stewardship of farmer livelihoods and carbon assets.
FAIRNESS, INCLUSION, AND GENDER SENSITIVITY
a) For all medium/high-risk systems, assess differential impacts across gender, caste/status, farm-size, and marginalized groups. b) Use participatory design sessions with representative farmers and Field Compliance Liaisons to detect harms early.
a) Train models and UX to recognize and mitigate gender disparities (for example, where data has historically been collected only from male heads of household). Implement targeted data collection to close representation gaps.
a) Provide model outputs and explanations in local languages and audio formats where literacy is limited. UX must be field-tested for low-bandwidth contexts and non-smartphone users.
Inclusion work and equitable model design align deeply with Boomitra’s SDG targets and benefit-sharing commitments.
TRANSPARENCY AND COMMUNICATION
a) Produce a public-facing Model Factsheet for any AI that materially affects farmer income or buyer-facing MRV outputs. Factsheets will include intended use, limitations, performance metrics, data provenance summary, and a contact for concerns.
a) Provide concise, action-oriented explanations for recommendations and MRV outputs. Use visuals, audio, and community extension agents to deliver explanations and helplines for disputes.
a) When credit issuance is supported by automated estimates, clearly disclose the role of automation in credit quantification, the human oversight mechanisms in place, and the audit trail that supports the final credit claim.
Transparency is essential for trust with buyers, registries, and farmer partners and matches industry practice of publishing assurances and principles around AI.
HUMAN OVERSIGHT AND DECISION RIGHTS
Human oversight ensures we preserve farmer agency and remedy errors swiftly.
AUDITING, VERIFICATION, AND EXTERNAL ASSURANCE
CONTRACTS, BENEFIT-SHARING, AND LEGAL PROTECTIONS
Embedding AI clauses into contracts protects farmers and Boomitra’s market credibility.
REPORTING CONCERNS
ENFORCEMENT AND DISCIPLINARY PROCEDURES
EXCEPTIONS AND POLICY CHANGE CONTROL
Exceptions to this policy must be documented, time-bound, and approved by the CDO and the Board. The policy will be reviewed annually or sooner if legal or technological changes require it.
Boomitra is committed to continuous improvement in risk management, ensuring that our approach remains responsive to the evolving landscape of global carbon markets, community needs, and climate realities.
FREQUENTLY ASKED QUESTIONS
Q: Will Boomitra stop using AI in farmer programs? A: No. We will continue to use AI to enhance farmer outcomes but with human oversight, clear consent, and auditability.
Q: Can farmers see how AI decided their carbon estimate? A: Farmers will receive explanations suitable for their context and can request an independent audit of the sampling and MRV pipeline.
Q: What happens if a model makes an error that affects payments? A: Boomitra will investigate via the appeal process, reverse incorrect payments where applicable, and provide remediation consistent with contract terms and insurance arrangements.