ETFOptimize | High-performance ETF-based Investment Strategies

Quantitative strategies, Wall Street-caliber research, and insightful market analysis since 1998.



NTT Research PHI Lab Scientists Address Bias in AI

ⓘ This article is third-party content and does not represent the views of this site. We make no guarantees regarding its accuracy or completeness.

Proposed Algorithm Overcomes Limits of Neural Network Fine-Tuning

NTT Research, Inc., a division of NTT (TYO:9432), today announced that scientists affiliated with its Physics & Informatics (PHI) Lab have co-authored a paper that proposes a way to overcome bias in deep neural networks (DNNs). A type of artificial intelligence (AI), DNNs have become pervasive in science, engineering, business and even popular applications, but they sometimes rely on spurious attributes that may convey bias. In a paper presented at the International Conference on Machine Learning (ICML), July 23-28, 2023, PHI Lab Research Intern Ekdeep Singh Lubana, a graduate student at the University of Michigan, and PHI Lab Research Scientist Hidenori Tanaka, an Associate at the Harvard University Center for Brain Science, together with three other scientists, proposed overcoming the limitations of naïve fine-tuning, the status quo method of reducing a DNN’s errors or “loss,” with a new algorithm that reduces a model’s reliance on bias-prone attributes. ICML is one of the three primary conferences on ML and AI, according to Google Scholar.

The other authors of the paper, titled “Mechanistic Mode Connectivity,” are Eric Bigelow, a graduate student in psychology at the Harvard Center for Brain Science; Robert Dick, Associate Professor in the College of Engineering at the University of Michigan; and David Krueger, Assistant Professor at the University of Cambridge and a member of its Computational and Biological Learning Lab.

While DNN-driven technologies, especially generative AI, have become popular, how they work is less well understood. The authors of this paper focus on the DNN loss landscape; more specifically, they consider how loss-minimizing functions that rely on different mechanisms for making predictions are connected. A DNN that classifies images, for instance of a fish (an illustration used in this study), can use both the object’s shape and the background as input attributes for prediction. Its loss-minimizing solutions would therefore fall into mechanistically dissimilar modes: one relying on the legitimate attribute of shape, the other on the spurious attribute of background color. Such modes lack linear connectivity, that is, a simple path of low loss between them. Naïve fine-tuning cannot fundamentally alter a model’s decision-making mechanism, because doing so requires moving to a different valley on the loss landscape; instead, one must drive the model over the barriers separating the “sinks,” or “valleys,” of low loss. The authors call this corrective algorithm Connectivity-Based Fine-Tuning (CBFT).
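The idea of (lack of) linear connectivity can be sketched numerically: interpolate in a straight line between two low-loss solutions and measure the loss along the path. The snippet below is a toy illustration only, not code from the paper; the two-valley loss function and all names (`linear_path_barrier`, `theta_shape`, `theta_background`) are our own invention, standing in for two models that rely on different mechanisms.

```python
import numpy as np

# Toy 2-D loss surface with two low-loss "valleys" centered at (-2, 0) and
# (2, 0), separated by a bump along the straight line between them.
def loss(theta):
    x, y = theta
    return min((x + 2) ** 2 + y ** 2, (x - 2) ** 2 + y ** 2) + 0.5 * np.exp(-x ** 2)

def linear_path_barrier(theta_a, theta_b, steps=101):
    """Maximum loss along the straight-line interpolation between two
    solutions, minus the higher endpoint loss. A large value means the
    solutions are not linearly connected by a low-loss path."""
    alphas = np.linspace(0.0, 1.0, steps)
    path_losses = [loss((1 - a) * np.asarray(theta_a) + a * np.asarray(theta_b))
                   for a in alphas]
    return max(path_losses) - max(path_losses[0], path_losses[-1])

theta_shape = np.array([-2.0, 0.0])      # solution using the "shape" mechanism
theta_background = np.array([2.0, 0.0])  # solution using the "background" mechanism
barrier = linear_path_barrier(theta_shape, theta_background)
```

On this toy surface both endpoints sit at low loss, yet the straight line between them crosses a high-loss ridge: the analogue of two mechanistically dissimilar modes that naïve fine-tuning, which only explores within one valley, cannot move between.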

“Naïve fine-tuning is just exploring the valley. Changing the mechanism requires you to go over the mountain, and this is what CBFT does,” Dr. Tanaka said. “The end result is to help DNNs, whether they’re classifying a fish or doing something else, like setting credit limits, to rely on object shape or actual credit worthiness or other legitimate attributes, rather than spurious attributes, such as background color or gender.”

Set up to rethink existing computational models and deliver real-world breakthroughs, the PHI Lab is focused on linear optics, quantum-related computing and neural networks. This paper represents the PHI Lab’s work in neural networks, which it is approaching from several angles. At the biological end of the spectrum is a paper presented at the Neural Information Processing Systems (NeurIPS) 2019 conference that advanced basic understanding of neural networks in the brain. (See previous press release.) A revised version of that paper has just been published in the latest edition of Neuron, one of the most influential journals in the field. An example of the PHI Lab’s theoretical approach is a paper on how self-supervised learning systems fail, presented earlier this year at the International Conference on Learning Representations (ICLR 2023). The ICML paper discussed in this press release represents the PHI Lab’s interest in questions with societal and practical impact, such as bias. It is also part of a new PHI Lab initiative on responsible AI.

“Our PHI Lab scientists and their colleagues have enlarged our understanding of the relatively neglected field of neural network fine-tuning in this notable paper and have proposed an innovative remedy for correcting against bias,” PHI Lab Director Yoshihisa Yamamoto said. “This is a first in what we hope will be a series of contributions aligned with our vision of a ‘Physics of Intelligence for Trustable AI.’”

The PHI Lab is engaged in a productive and extensive research agenda. In a recent three-month span, top academic journals accepted or published a dozen papers co-authored by PHI Lab scientists. The PHI Lab also has reached joint research agreements with numerous institutions, Harvard University being the most recent. Other agreements currently in place include those with the California Institute of Technology (Caltech), Cornell University, Massachusetts Institute of Technology (MIT), the NASA Ames Research Center in Silicon Valley, Notre Dame University, Stanford University, Swinburne University of Technology, the University of Michigan and the Tokyo Institute of Technology.

About NTT Research

NTT Research opened its offices in July 2019 as a new Silicon Valley startup to conduct basic research and advance technologies that promote positive change for humankind. Currently, three labs are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab and the Medical and Health Informatics (MEI) Lab. The organization aims to upgrade reality in three areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; and 3) medical and health informatics. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. © 2023 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

As #AI use in science and business increases, the need grows to prevent bias and improve accuracy. @NttResearch developed an algorithm that could improve how #AI models detect legitimate attributes for making predictions and reduces bias. #UpgradeReality




 


DISCLAIMER

All content herein is issued solely for informational purposes and is not to be construed as an offer to sell or the solicitation of an offer to buy, nor should it be interpreted as a recommendation to buy, hold or sell (short or otherwise) any security.  All opinions, analyses, and information included herein are based on sources believed to be reliable, but no representation or warranty of any kind, expressed or implied, is made including but not limited to any representation or warranty concerning accuracy, completeness, correctness, timeliness or appropriateness. We undertake no obligation to update such opinions, analysis or information. You should independently verify all information contained on this website. Some information is based on analysis of past performance or hypothetical performance results, which have inherent limitations. We make no representation that any particular equity or strategy will or is likely to achieve profits or losses similar to those shown. Shareholders, employees, writers, contractors, and affiliates associated with ETFOptimize.com may have ownership positions in the securities that are mentioned. If you are not sure if ETFs, algorithmic investing, or a particular investment is right for you, you are urged to consult with a Registered Investment Advisor (RIA). Neither this website nor anyone associated with producing its content are Registered Investment Advisors, and no attempt is made herein to substitute for personalized, professional investment advice. Neither ETFOptimize.com, Global Alpha Investments, Inc., nor its employees, service providers, associates, or affiliates are responsible for any investment losses you may incur as a result of using the information provided herein. Remember that past investment returns may not be indicative of future returns.

Copyright © 1998-2017 ETFOptimize.com, a publication of Optimized Investments, Inc. All rights reserved.