AI Weekly: The road to ethical adoption of AI

Summary: As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges.

 



As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Increasingly, the many sets of guidelines — the Organization for Economic Cooperation and Development’s AI repository alone hosts more than 100 documents — are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.

This is cause for alarm because, as the coauthors of a recent paper write, AI’s impacts are hard to assess — especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may never come to pass and on unrealistic generalizations that make the conversations untenable. In particular, companies run the risk of engaging in “ethics shopping,” “ethics washing,” or “ethics shirking,” in which they burnish their image with customers to build trust while minimizing accountability.

These points are salient in light of efforts by the European Commission’s High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” In a paper, digital ethics researcher Mark Ryan argues that AI isn’t the type of thing that has the capacity to be trustworthy, because the category of “trust” simply doesn’t apply to AI. In fact, AI can’t have the capacity to be trusted as long as it can’t be held responsible for its actions, he argues.

“Trust is separate from risk analysis that is solely based on predictions based on past behavior,” he explains. “While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, it is not the sole or defining characteristic of trust. Though we may trust people that we rely on, it is not presupposed that we do.”

Responsible adoption

Productizing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

Recognizing this, organizations must overcome misaligned incentives, disciplinary divides, uneven distributions of responsibility, and other blockers to adopt AI responsibly. Doing so requires an impact assessment framework that is not only broad, flexible, iterative, operationalizable, and guided, but highly participatory as well, according to the paper’s coauthors. They emphasize the need to look beyond the impacts that are simply assumed to matter and to become more deliberate about deployment choices. As a way of normalizing the practice, the coauthors advocate for including these ideas in documentation the same way topics like privacy and bias are covered today.
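As a rough illustration of that documentation practice, here is a minimal sketch of an impact-assessment entry that could sit alongside a model’s existing privacy and bias sections. The structure and field names are hypothetical, not drawn from the paper; they simply encode the criteria the coauthors list (participatory, iterative, attentive to downstream effects):

```python
from dataclasses import dataclass

# Hypothetical sketch: an impact-assessment record kept alongside the
# privacy and bias sections of a model's documentation. Field names are
# illustrative, not taken from the paper discussed above.
@dataclass
class ImpactAssessment:
    system_name: str
    deployment_context: str         # where, and on whom, the system acts
    stakeholders: list[str]         # participatory: who was consulted
    anticipated_impacts: list[str]  # first-order effects
    downstream_impacts: list[str]   # second- and third-order effects
    mitigations: list[str]          # deliberate deployment choices
    next_review: str                # iterative: assessments are revisited

assessment = ImpactAssessment(
    system_name="loan-approval-model-v2",
    deployment_context="consumer credit decisions at a retail bank",
    stakeholders=["applicants", "loan officers", "regulators"],
    anticipated_impacts=["faster decisions", "less manual review"],
    downstream_impacts=["possible feedback loops in credit access"],
    mitigations=["human review of declines", "quarterly fairness audit"],
    next_review="2021-12-01",
)
```

Treating the record as structured data rather than free text makes it possible to require these fields in a review pipeline, much as privacy checklists are enforced today.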

Another paper — this one from researchers at the Data & Society Research Institute and Princeton — posits “algorithmic impact assessments” as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address the issues of transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems.

This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, an approach that doesn’t necessarily measure harms and may even obscure them — real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects — rather than erodes — dignity.

As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column: “Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy, others relate to business metrics. But each require careful consideration as they have consequences in the final outcome from the system. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations.”
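Gupta’s latency-versus-accuracy example can be made concrete. The sketch below, with entirely hypothetical model names and numbers, shows the kind of reformulation he describes: rather than blending business and user metrics into a single score to maximize, a team can treat the user-facing latency budget as a hard constraint and then optimize accuracy among the models that satisfy it:

```python
# Hypothetical candidates measured on accuracy and latency; all names
# and numbers are illustrative, not from the column quoted above.
candidates = {
    "distilled-small": {"accuracy": 0.91, "latency_ms": 12},
    "baseline-medium": {"accuracy": 0.94, "latency_ms": 48},
    "ensemble-large": {"accuracy": 0.95, "latency_ms": 210},
}

LATENCY_BUDGET_MS = 50  # business constraint, e.g. interactive use

# Reformulation: filter to models that meet the user-facing constraint,
# then pick the most accurate of what remains, instead of trading the
# two objectives off against each other in one blended score.
feasible = {name: m for name, m in candidates.items()
            if m["latency_ms"] <= LATENCY_BUDGET_MS}
best = max(feasible, key=lambda name: feasible[name]["accuracy"])
print(best)  # -> baseline-medium
```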

Reposted from: venturebeat.com
