Ethical use of AI: 5 major challenges

Originally published at the Nextcloud blog: "Ethical use of AI: five major challenges"

AI has a lot of potential — to let us do things better and faster, but also to cause great harm to our privacy, creativity, and perhaps even our mental well-being. We believe most of its potential is yet to be discovered, but today we need to figure out how to make it work for us, and not against us (or others).

And this is why we need AI ethics. At Nextcloud, we care about privacy and transparency, and believe that the ethical use of AI tools in both commercial and personal settings is essential.

In this article, we delve into five major challenges confronting organizations in their quest for ethical AI adoption: issues with major providers of AI tools, transparency concerns, regulatory compliance, data sovereignty challenges, and the dilemma of single-vendor ecosystems. By exploring these challenges in depth, we aim to provide the insights needed to navigate the ethical complexities of AI adoption and establish a safer, more sustainable approach to business.

Ethical AI in Nextcloud

Nextcloud Hub is an AI-powered collaboration platform that offers freedom of choice when it comes to hosting AI, sourcing an appropriate model, and choosing the right approach to AI integration.

Nextcloud makes an ongoing effort to promote the ethical use of AI tools. To assist our users, we employ our Ethical AI Rating to help them choose the right match for the constraints and principles of their business.


Big tech and ethical use of AI

Tech giants like Google and Meta claim to scrutinize their development and use of AI, addressing issues such as bias, privacy, and accountability. Creating dedicated research boards, drafting ethical guidelines, and participating in various forums and high-profile collaborations help these companies position themselves as opinion leaders and ambassadors of AI ethics.

Those initiatives also serve as a differentiator in a competitive market and help companies improve their public image where consumer trust means everything. Unfortunately, ethical AI adoption often turns out to be simple window dressing, with profit motives prevailing over ethical pursuits, as we see big AI providers terminate their ethics teams amid growing investments in AI products:

  • In early 2023, Microsoft terminated its entire AI Ethics and Society team as part of a massive layoff, despite leading the market in the development of mass-access AI products and actively promoting ethical AI principles.
  • In 2018, Google published its AI principles, committing to ethical AI development. However, in the following years the company faced backlash over the controversial Project Maven, a military AI project for the Pentagon, fired its leading AI ethics researcher Timnit Gebru, split up its Responsible Innovation team, and ultimately faced internal dissent from employees over the implementation of these principles.
  • In 2023, Meta disbanded its Responsible AI (RAI) team while investing even more resources into generative artificial intelligence products.

Responsible innovation efforts and dedicated ethics teams look like a veneer of ethical responsibility, while the deeper, systemic issues inherent in AI deployment remain unchallenged.

Data collection and transparency issues

One of these challenges is the transparency of AI training practices.

Even though companies are legally required to inform users about how their data is processed, some AI providers, like Meta, collect vast amounts of data from user content under policies that are very hard to opt out of. And this is not the only example of tech giants cutting corners to harvest data for AI training when running out of supply.

For example, in 2021, OpenAI reportedly transcribed over one million hours of YouTube videos to feed data to ChatGPT. And according to two members of Google's privacy team, in 2022 the company sought to expand the use of consumer data for AI training, including publicly available content in Google Docs, Google Sheets, and related apps.

Ethical AI certification standards and regulations

Data privacy regulations play a crucial role in governing the use of AI technologies, ensuring that individuals' privacy rights are protected and that AI-powered applications are used responsibly. From the perspective of a company employing AI tools in its business, compliance with such regulations is essential.

Data privacy regulations and the use of AI

In the European Union, such regulation is provided by the General Data Protection Regulation (GDPR) and the AI Act, a common regulatory framework for AI. In the US, the National Institute of Standards and Technology issued the AI Risk Management Framework (AI RMF), which offers guidance to companies and other entities on designing, deploying, and using AI. However, this framework is voluntary and carries no penalties for noncompliance.

In some regions, legislation also controls the collection and processing of specific types of data that companies can collect inadvertently, for example protected health information (PHI) and biometric data gathered through various AI-powered health apps. In the US, the Health Insurance Portability and Accountability Act (HIPAA) and the Illinois Biometric Information Privacy Act (BIPA) regulate the collection of health and biometric data.

Ethical AI certification standards

Ethical AI certification standards are designed to ensure that the development and deployment of AI technologies align with societal values, promote trust, and mitigate potential harms. While noncompliance does not lead to legal penalties, companies still face certain risks, such as reputational damage and strained stakeholder relations. Examples include the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the EU Ethics Guidelines for Trustworthy AI, ISO/IEC JTC 1/SC 42, AECP, RAIL, and more.

How to minimize the risks of noncompliance

Regulations vary by region and industry, and the first essential step is to research the data protection laws relevant to your business. Noncompliance, even if unintentional, may lead to serious consequences, including fines and reputational damage. While these regulations enforce various policies, there are common principles to bear in mind that can help minimize risks:

  • Obtain explicit consent before collecting data, informing users of what data is collected and how it is used and shared. Make sure to provide straightforward ways to opt out.
  • Make your data processing and storage protocols accessible to ensure compliance and ease potential user concerns.
  • Use models that provide transparent information about training data and algorithms used for collecting and processing.
  • Store user data and run your AI locally to avoid conflicts with your local regulations. If local hosting is not possible, vet third-party providers for their compliance with data protection regulations. Continuously monitor their practices to ensure ongoing compliance.
  • Train employees on data protection regulations and company policies. Promote a culture of privacy and security awareness within the organization.
  • Regularly update and patch your systems to mitigate vulnerabilities.
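As an illustration of the first principle, consent can be enforced in code before any data ever enters a collection pipeline. The sketch below is a minimal, hypothetical example in Python; the class and function names are our own invention for illustration, not part of any particular framework, and a real system would also persist consent records and log the legal basis for each collection.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical registry mapping each user to the purposes
    (e.g. "analytics", "ai_training") they explicitly consented to."""
    _consents: dict = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        """Record explicit, informed consent for one purpose."""
        self._consents.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id: str, purpose: str) -> None:
        """A straightforward opt-out: revoking is as easy as granting."""
        self._consents.get(user_id, set()).discard(purpose)

    def allows(self, user_id: str, purpose: str) -> bool:
        return purpose in self._consents.get(user_id, set())

def collect_for_training(registry: ConsentRegistry, user_id: str, content: str):
    """Collect user content only if consent covers the 'ai_training' purpose."""
    if not registry.allows(user_id, "ai_training"):
        return None  # No consent: the data never enters the pipeline.
    return {"user": user_id, "content": content, "purpose": "ai_training"}
```

The key design choice is that the consent check sits at the single entry point of the pipeline, so an opt-out takes effect immediately and no downstream component needs to re-check it.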

Nextcloud compliance kit

We provide our customers with direct consultation services and multiple resources to support their compliance efforts. This includes a high-level 12-step checklist offering an overview of key compliance requirements and a detailed administrator manual providing concrete, hands-on guidance for implementing compliance measures effectively.


Data locality and data sovereignty

The algorithms and data usage policies of public AI services lack transparency, which makes it difficult for organizations to ensure ethical AI practices. Moreover, when employees use publicly available cloud-based AI tools for work, this may cause compliance issues and privacy risks related to data location:

  • Using third-party AI services may raise compliance issues with data protection regulations (e.g. GDPR, CCPA) and industry-specific standards (e.g. HIPAA, PCI DSS).
  • Public AI services may store data in jurisdictions that do not comply with the company’s data sovereignty requirements, leading to legal and regulatory risks. It can work the other way round as well: for example, the US CLOUD Act allows US law enforcement to compel US-based technology companies to disclose electronic communications, regardless of where the data is stored.

Hosting AI tools and their data locally is crucial for ensuring robust data privacy and compliance. While cloud-based solutions offer flexibility and scalability, local hosting provides the control and assurance necessary for handling sensitive or regulated data effectively.

AI vendor lock-in and strategic risks to digital autonomy

AI services provided by big tech vendors are often tightly integrated with their products, offering smoother operations and better overall performance on the user side. However, the risks of being locked into a single-provider ecosystem include limited flexibility in choosing tools and a heavy dependence on the vendor's decision-making and product management. This also creates strategic risks, not only for companies but for society as a whole.

After the Danish privacy regulator ruled against sharing students' data with Google, the company reportedly promised to change how it processes data in order to continue supplying its products to Denmark's schools. This means the schools avoid short-term disruptions and also save funds, as companies like Google and Microsoft make their products affordable for educational organizations.

However, in the long term, by providing kids with proprietary technology, companies get a strong grip over their future choices, with kids habituated to using the products and sharing their data since childhood.

Similarly, it is important to act strategically when adopting AI in daily life — a negative change can happen gradually and go unnoticed until it is too late. While students' data may not be used to train foreign big tech AI en masse today, it might be tomorrow, given the steady growth of AI's popularity. A kid will leave school and keep using Google, because it has already trained a personal AI for them, and any other product will be hard to adopt.

Vendor independence should be part of the long-term strategy. European organizations are better off using European AI that is part of a sovereign ecosystem, hosted in Europe and trained on local data.

Regain control of your data with Nextcloud Hub

Nextcloud Hub is the most popular self-hosted collaboration platform, integrating file sharing, document collaboration, groupware, and videoconferencing tools in a single modular interface. It is secure and private by design, giving you ultimate control over your data and helping ensure compliance.

Realizing the great potential AI holds for our daily life and work, we powered Nextcloud Hub with AI features that deliver performance while respecting your privacy. It features the Nextcloud Assistant, an AI-powered interface that enhances your entire platform with versatile automation features, augmented communication, and content creation tools. You build your AI stack the way you want it, with multiple apps, models, and deployment formats available.
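As a sketch of what building a self-hosted AI stack can look like in practice, apps on a Nextcloud server are managed with the `occ` admin tool. The commands below are illustrative only: the exact app identifiers (such as `assistant` or a local language-model backend) vary by Nextcloud version and what is published in your app store, so treat the app IDs as assumptions and verify them against your own instance before running anything.

```shell
# Run as the web server user from the Nextcloud installation directory.
# App IDs below are assumptions; verify them in your app store first.

# Install the Nextcloud Assistant interface.
sudo -u www-data php occ app:install assistant

# Install a locally hosted text-processing backend so prompts
# never leave your server (app ID is an assumption).
sudo -u www-data php occ app:install llm2

# List installed apps to confirm the AI stack is active.
sudo -u www-data php occ app:list
```

Because both the interface and the model backend run on your own hardware, user data stays within your jurisdiction, which directly addresses the data sovereignty concerns discussed above.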


Try Nextcloud Hub

Discover Nextcloud Hub, a privacy-first open-source solution
for business collaboration that puts you in the driver's seat.

Try now
