AI Safety and Ethics Policy Statement

CoachEm follows a people-first AI strategy. While AI can certainly reduce costs by automating routine work, 'human-in-the-loop' applications of AI have a major impact on productivity by amplifying human capabilities. We put people first in how we leverage AI and believe this approach holds the greatest potential for value creation in coaching and development.

Safety Policy

The CoachEm™ platform and the AI features within it are designed with safety as a top priority. We strive to follow industry best practices and guidelines for developing AI systems, including robust testing, validation, and quality assurance measures to minimize the risk of harmful outcomes.

We continually monitor and evaluate the CoachEm platform for potential safety concerns and take prompt action to address any identified issues. We are committed to conducting ongoing research to enhance the safety of the platform’s AI functionality and proactively mitigate any potential risks.

Our AI technology stack is built in-house and does not rely on external APIs, which are updated outside of our direct control. Keeping our AI technology stack in-house helps us prevent new and problematic emergent behaviors caused by other organizations' AI updates from impacting our technology.

Ethics Policy

The CoachEm platform is designed and maintained in line with a strong ethical framework, guided by principles of fairness, transparency, and accountability. We are committed to upholding ethical standards and complying with all relevant laws and regulations governing the use of AI technology in our products and services.

The CoachEm platform, and our training on how to use it, encourages customers to use this AI-enabled technology for ethical purposes. The platform is designed to inform and support actions taken by people, and its AI components help end users avoid activities that may cause harm, violate privacy rights, or perpetuate discrimination, bias, or prejudice.

We prioritize transparency around CoachEm’s approach to implementing this policy. This policy is meant to provide a clear explanation of our goals for the use of AI and its potential impact on users, customers, and society at large.

Non-Bias Policy

We strive to build and maintain an AI product that is free from bias and discrimination to the maximum extent possible. We take proactive measures to reduce or eliminate any biases that may arise from data collection, model training, or system deployment.

We conduct regular audits of the AI components of the CoachEm platform to identify and address any biases that may emerge. These audits are led by Ryan Baker, a professor at the University of Pennsylvania. Dr. Baker is one of the world's leading authorities on algorithmic bias within applications designed for education and training. We also provide guidelines and instructions recommending how users can avoid biased use of the CoachEm platform, recognizing that responsible use of its AI functionality also rests in part with our users and customers.

We strive to promote diversity and inclusivity in the development and deployment of the CoachEm platform, with the ultimate goal of ensuring that our technology is beneficial to all users, regardless of race, gender, religion, age, sexual orientation, disability, or any other salient individual differences.

Conclusion

The CoachEm platform is designed with a strong commitment to safety, ethics, and non-bias. We continually strive to improve our technology and policies so that the use of AI through our products is a positive force in the advancement of society, while we uphold the highest standards of responsibility and integrity in the platform's development and use.

We recognize that AI, and the legal and societal responses to it, are rapidly changing. We endeavor to stay abreast of developments in this area and recognize that we may need to adjust our approach and this policy over time.