Ethical application and development of AI at Fathom

This policy was last updated June 1, 2024.

At Fathom, we are committed to the ethical development and application of Artificial Intelligence (AI) for the betterment of human wellbeing. Our policy is rooted in principles of transparency; privacy; human wellbeing; justice, equity, diversity, and inclusion (JEDI); and continuous learning and improvement.

Transparency, Control, and Human-in-the-Loop Approach:

Transparency and human control are core tenets of our AI development process. We ensure transparency throughout all stages of development, from data collection to model deployment, making our AI systems understandable and accountable to both developers and end-users.

Our goal is to empower humans to leverage the capabilities of AI while retaining control and agency over decision-making processes.

This approach not only enhances the reliability and safety of our AI systems but also ensures that they align with human values and objectives.


Privacy and Security:

We uphold rigorous data protection standards, ensuring individuals have control over their personal data and that it is used responsibly and transparently in AI development. 

We recognize that most breaches are caused by human error, and therefore build a culture of privacy and security at every level of the organization. We embrace the principle of least privilege across all our IT structures, ensuring that only those who need access to data are granted it.

Fathom complies with current guidelines and regulations governing the development and application of AI in the United States and worldwide. Specifically, we ensure compliance with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the Blueprint for an AI Bill of Rights in the United States, and the Artificial Intelligence and Data Act in Canada; we also take guidance from the OECD AI Principles and the United Nations AI Advisory Body.

Justice, Equity, Diversity, and Inclusion (JEDI):

We integrate a JEDI lens into all aspects of our operations, fostering a diverse team reflective of the communities we serve and incorporating principles of justice, equity, diversity, and inclusion into our decision-making processes, product development, and organizational culture.

This extends into the design, development, and application of our AI products. Through diversity in our team, our testing community, and our data sets, we ensure we build products that reflect the needs of the diverse communities we serve.

Bias:

We mitigate bias in AI systems by continuously monitoring and evaluating for biases at all stages of development, striving to create fair and equitable AI systems for all users. We recognize that most widely available LLMs carry inherent Western biases, reflecting racist and sexist beliefs, because of the data they are trained on.

We address this with a human-in-the-loop system that gives users transparency into how their data is coded and the control to amend it when the AI reflects bias.

We also build and train our own models with data that has been accurately coded by humans, drawn from broad and diverse respondent groups, and with an eye towards addressing bias.

Human Wellbeing:

We prioritize human flourishing as defined by the Harvard Human Flourishing framework, striving to build AI that enables humans to be creative and strategic in their work, connected to one another, and free of mundane repetitive tasks or tasks that are harmful to their wellbeing. We believe humans can be assisted by AI applications and that a key aspect of human wellbeing is choice and control, so we build that into all aspects of our AI products.

We seek to maximize the benefit of AI applications for the greatest number of people while minimizing harm, with a particular focus on minimizing harm to groups of people who are traditionally and currently marginalized or disadvantaged in our culture and broader systems of power.

Our AI strategy is aligned with our Impact Strategy:

1. Centering Voices of Impacted Communities: By making it possible for organizations and decision makers to understand communities at scale in their own words, we put the authentic voices and experiences of those impacted by decisions at the center of research about the decisions that affect them. We believe that this is almost always a good thing. We mitigate the potential for harm and exploitation of this capability by defining in our terms of use, and enforcing, that our technology cannot be used to actively undermine a more fair and just world, according to a set of specific definitions.

2. Developing Empathetic Leadership: We foster empathy among decision-makers and message creators, promoting a deeper understanding of the impact of their strategies, decisions, and messages on the communities they serve. When leaders are able to ‘hear’ directly from communities, they become more empathetic toward them and therefore make decisions that serve those communities better than when communities are represented only as figures.

3. Transforming the Research Sector: We participate in the transformation of the research sector to adopt scaled qualitative research as essential to good decision making, not optional or additive. We believe that qualitative data has the power to unlock possibilities that are otherwise unknown, leading to more empathetic decision making and, ultimately, to greater fairness and justice. For this reason, we believe that transforming the perception of qualitative data in survey and other scaled applications from optional to essential is a powerful impact goal in and of itself.

Continuous Improvement and Contribution to Best Practices:

We are committed to continuous improvement and actively engage in learning from and contributing to best practices and frameworks within our industry. By staying abreast of the latest developments in AI ethics, we strive to uphold the highest standards of ethical conduct in our AI development endeavours.

Our dedication to continuous improvement extends beyond our internal processes to encompass broader industry practices and standards. We actively participate in collaborative efforts to establish and refine ethical guidelines and frameworks for the responsible development and deployment of AI systems.

Contacting us

If you would like to contact us to understand more about this Policy or wish to contact us concerning any matter relating to individual rights and your Personal Information, you may do so via the Contact page.