Why You Should Have an AI & Ethics Board

Summary: Guidelines are great -- but they need to be enforced. An ethics board is one way to ensure these principles are woven into product development and uses of internal data.


[Image: onephoto via Adobe Stock]

Most businesses today have a great deal of data at their fingertips. They also have the tools to mine this information. But with this power comes responsibility. Before using data, technologists need to step back and evaluate the need for it. In today's data-driven, virtual age, it's not a question of whether you have the information, but whether you should use it and how.

Consider the Implications of Big Data

Artificial intelligence (AI) tools have revolutionized the processing of data, turning huge amounts of information into actionable insights. It's tempting to believe that all data is good, and that AI makes it even better. Spreadsheets, graphs, and visualizations make data "real." But as any good technologist knows, the old computing adage "garbage in, garbage out" still applies. Now more than ever, organizations need to question where the data originates and how the algorithms interpret that data. Buried in all those graphs are potential ethical risks, biases, and unintended consequences.

It's easy to ask your technology partners to develop new features or capabilities, but as more businesses adopt machine learning (ML) operations and tools to streamline and inform processes, the potential for bias grows. For instance, are the algorithms unknowingly discriminating against people of color or women? What is the source of the data? Is there permission to use the data? All these considerations need to be transparent and closely monitored.

Consider How Existing Law Applies to AI and ML

The first step in this journey is to develop data privacy guidelines. These include, for example, policies and procedures that address notice and transparency about how data is used for AI, how information is protected and kept up to date, and how sharing data with third parties is governed. Ideally, these guidelines build on an existing, overarching data privacy framework.

Beyond privacy, other bodies of law may affect your development and deployment of AI. For example, in the HR space, it is critical to refer to federal, state, and local employment and anti-discrimination laws. Likewise, in the financial sector, there is a range of applicable rules and regulations that must be taken into account. Existing law continues to apply, just as it does outside the AI context.

Staying Ahead While Using New Technologies

Beyond existing law, the acceleration of technology, including AI and ML, makes the considerations more complex. AI and ML introduce new opportunities to discern insights from data that were previously unachievable, and in many ways they can do so better than humans. But AI and ML systems are ultimately created by humans, and without careful oversight there is a risk of introducing unwanted bias and outcomes. Creating an AI and Data Ethics Board can help businesses anticipate issues in these new technologies.

Begin by establishing guiding principles to govern the use of AI, ML and automation specifically in your company. The goal is to ensure that your models are relevant and functional, and do not “drift” from their intended goal unknowingly or inappropriately. Consider these five guidelines:

1. Accountability and transparency. Conduct audits and risk assessments to test your models, and actively monitor and improve your models and systems so that changes in the underlying data or model conditions do not inappropriately affect the desired results (a minimal drift-check sketch follows this list).

2. Privacy by design. Ensure that your enterprise-wide approach incorporates privacy and data security into ML and associated data processing systems. For example, do your ML models minimize access to identifiable information so that you use only the personal data you need to generate insights? Are you providing individuals with a reasonable opportunity to examine their own personal data and to update it if it is inaccurate? (A data-minimization sketch also follows this list.)

3. Clarity. Design AI solutions that are explainable and direct. Are your ML data discovery and data usage models designed with understanding as a key attribute, measured against an expressed desired outcome?

4. Data governance. Understanding how you use data and the sources from which you obtain it should be key to your AI and ML principles. Maintain processes and systems to track and manage data usage and retention. If you use external information in your models, such as government reports or industry terminologies, understand the processes and impact of that information in your models.

5. Ethical and practical use of data. Establish governance to provide guidance and oversight on the development of products, systems and applications that involve AI and data.
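
To make principle 1 concrete, here is a minimal sketch of the kind of drift check an audit process might run against a model's inputs or scores. It uses the population stability index (PSI); the bin count, the 0.2 threshold, and the example distributions are illustrative assumptions, not requirements drawn from the principles above.

```python
# Minimal drift-check sketch -- illustrative assumptions, not a standard.
import numpy as np


def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare the current distribution of a feature or score against the
    reference distribution captured when the model was approved."""
    # Bin edges come from the reference sample so both samples are measured
    # against the same baseline.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert counts to proportions; a small epsilon avoids log(0) for empty bins.
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_pct = cur_counts / max(cur_counts.sum(), 1) + eps

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    approved_scores = rng.normal(0.0, 1.0, 10_000)    # scores at model sign-off
    production_scores = rng.normal(0.4, 1.2, 10_000)  # scores observed later

    psi = population_stability_index(approved_scores, production_scores)
    # 0.2 is a commonly used, but still judgment-based, threshold for review.
    if psi > 0.2:
        print(f"PSI={psi:.3f}: possible drift -- flag for audit and board review")
    else:
        print(f"PSI={psi:.3f}: distribution looks stable")
```

A check like this decides nothing on its own; it simply gives model owners and the board a concrete trigger for the "has the model drifted from its intended goal?" conversation.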
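
For principle 2, here is an equally small sketch of data minimization before training: keep only the fields the model needs and replace direct identifiers with a salted hash. The column names and the hashing approach are hypothetical, for illustration only; real pseudonymization should follow whatever methods your privacy and security teams have approved.

```python
# Data-minimization sketch -- hypothetical column names, illustrative only.
import hashlib

import pandas as pd

# Only the fields the model actually needs; everything else never leaves
# the source system.
REQUIRED_FEATURES = ["tenure_months", "monthly_spend", "support_tickets"]


def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash so records can
    still be joined or de-duplicated without exposing the raw ID."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


def prepare_training_frame(raw: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Return a training table containing only the required features plus a
    pseudonymous key; names, emails, and other identifiers are excluded by
    construction."""
    out = raw[REQUIRED_FEATURES].copy()
    out["record_key"] = raw["customer_id"].astype(str).map(lambda v: pseudonymize(v, salt))
    return out
```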

Principles like these can both guide discussion about these issues and help to create policies and procedures about how data is handled in your business. More broadly, they will set the tone for the entire organization.

Create an AI & Ethics Board

Guidelines are great -- but they need to be enforced to be effective. An AI and data ethics board is one way to ensure these principles are woven into product development and uses of internal data. But how can companies go about doing this?

Begin by bringing together an interdisciplinary team of internal and external experts: IT, product development, legal and compliance, privacy, security, audit, diversity and inclusion, industry analysts, outside counsel, and consumer-affairs specialists, for instance. The more diverse and knowledgeable the team, the more effective your discussions about potential implications and the viability of different use cases can be.

Next, spend time discussing the larger issues. It’s important here to step away from process for a minute and immerse yourselves in live, productive discussion. What are your organization’s core values? How should they inform your policies around development and deployment of AI and ML? All this discussion sets the foundation for the procedures and processes you outline.

Setting a regular meeting cadence to review projects can be helpful as well. Again, the bigger issues should drive the discussion. For instance, most product developers will present the technical aspects -- such as how the data is protected or encrypted. The board's role is to analyze the project on a more fundamental level. Some questions that can guide the discussion:

Do we have the rights to use the data in this way?

Should we be sharing this data at all?

What is the use case?

How does this serve our customers?

How does this serve our core business?

Is this in line with our values?

Could it result in any risks or harms?

Because AI ethics has become an increasingly important issue, there are many resources to help your organization navigate these waters. Reach out to your vendors, consulting firms, or trade groups and consortiums, such as the Enterprise Data Management (EDM) Council. Implement the pieces that are appropriate for your business, but remember that tools, checklists, processes, and procedures should not replace the value of the discussion.

The ultimate goal is to make these considerations part of the company culture so that every employee who touches a project, works with a vendor, or consults with a client keeps data privacy front of mind.

Reposted from: InformationWeek
