-
- OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings
S. Dev and T. Li and J. M. Phillips and V. Srikumar
arXiv preprint arXiv:2007.00049 (2020)
-
- What are the biases in my word embedding?
N. Swinger and M. De-Arteaga and N. T. Heffernan IV and M. D. M. Leiserson and A. T. Kalai
arXiv preprint arXiv:1812.08769 (2018)
-
- The Trouble with Bias
K. Crawford
Conference on Neural Information Processing Systems, Keynote (2017)
-
- Social Bias in Elicited Natural Language Inferences
R. Rudinger and C. May and B. Van Durme
Proceedings of the 1st ACL Workshop on Ethics in Natural Language Processing 74-79 (2017)
-
- On Measuring Social Biases in Sentence Encoders
C. May and A. Wang and S. Bordia and S. R. Bowman and R. Rudinger
arXiv preprint arXiv:1903.10561 (2019)
http://arxiv.org/abs/1903.10561
-
- On Measuring and Mitigating Biased Inferences of Word Embeddings
S. Dev and T. Li and J. M. Phillips and V. Srikumar
AAAI (2020)
-
- Offline bilingual word vectors, orthogonal transformations and the inverted softmax
S. L. Smith and D. H. P. Turban and S. Hamblin and N. Y. Hammerla
International Conference on Learning Representations (2017)
-
- Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
S. Ravfogel and Y. Elazar and H. Gonen and M. Twiton and Y. Goldberg
arXiv preprint arXiv:2004.07667 (2020)
-
- Mitigating Gender Bias in Natural Language Processing: Literature Review
T. Sun and A. Gaut and S. Tang and Y. Huang and M. ElSherief and J. Zhao and D. Mirza and E. M. Belding and K.-W. Chang and W. Y. Wang
arXiv preprint arXiv:1906.08976 (2019)
http://arxiv.org/abs/1906.08976
-
- Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
K. Webster and M. Recasens and V. Axelrod and J. Baldridge
Transactions of the Association for Computational Linguistics 6 605-617 (2018)
-
- Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But Do Not Remove Them
H. Gonen and Y. Goldberg
Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT) 609-614 (2019)
-
- Learning Gender-Neutral Word Embeddings
J. Zhao and Y. Zhou and Z. Li and W. Wang and K.-W. Chang
Proceedings of the Conference on Empirical Methods in Natural Language Processing (2018)
https://www.aclweb.org/anthology/D18-1521
https://doi.org/10.18653/v1/D18-1521
-
- Language Technology is Power: A Critical Survey of "Bias" in NLP
S. L. Blodgett and S. Barocas and H. Daumé III and H. Wallach
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
https://www.aclweb.org/anthology/2020.acl-main.485
https://doi.org/10.18653/v1/2020.acl-main.485
-
- Gender Bias in Coreference Resolution
R. Rudinger and J. Naradowsky and B. Leonard and B. Van Durme
Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) 8-14 (2018)
-
- Gender Bias in Contextualized Word Embeddings
J. Zhao and T. Wang and M. Yatskar and R. Cotterell and V. Ordonez and K.-W. Chang
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) 629-634 (2019)
https://www.aclweb.org/anthology/N19-1064
https://doi.org/10.18653/v1/N19-1064
-
- Gender as a Variable in Natural-Language Processing: Ethical Considerations
B. Larson
Proceedings of the 1st ACL Workshop on Ethics in Natural Language Processing (2017)
https://www.aclweb.org/anthology/W17-1601
https://doi.org/10.18653/v1/W17-1601
-
- Fairness in Representation: Quantifying Stereotyping as a Representational Harm
M. Abbasi and S. A. Friedler and C. Scheidegger and S. Venkatasubramanian
Proceedings of the 2019 SIAM International Conference on Data Mining (2019)
https://epubs.siam.org/doi/pdf/10.1137/1.9781611975673.90
https://doi.org/10.1137/1.9781611975673.90
-
- Consumer credit-risk models via machine-learning algorithms
A. E. Khandani and A. J. Kim and A. Lo
Journal of Banking & Finance 34 2767-2787 (2010)
https://EconPapers.repec.org/RePEc:eee:jbfina:v:34:y:2010:i:11:p:2767-2787
-
- Big data's disparate impact
S. Barocas and A. D. Selbst
California Law Review 104 671 (2016)
-
- Attenuating Bias in Word Vectors
S. Dev and J. M. Phillips
Proceedings of Machine Learning Research, PMLR 879-887 (2019)
http://proceedings.mlr.press/v89/dev19a.html
-
- Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. C. Tan and L. E. Celis
arXiv preprint arXiv:1911.01485 (2019)
-
- A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
A. Lauscher and G. Glavas and S. P. Ponzetto and I. Vulic
AAAI (2020)
-
- A Decomposable Attention Model for Natural Language Inference
A. Parikh and O. Täckström and D. Das and J. Uszkoreit
Conference on Empirical Methods in Natural Language Processing 2249-2255 (2016)
-
- The Technique of Semantics
J. R. Firth
Transactions of the Philological Society 24 36-73 (1935)
https://doi.org/10.1111/j.1467-968X.1935.tb01254.x
-
- A Synopsis of Linguistic Theory, 1930-1955
J. R. Firth
Studies in Linguistic Analysis (1957)
-
- Distributed Representations of Words and Phrases and Their Compositionality
T. Mikolov and I. Sutskever and K. Chen and G. S. Corrado and J. Dean
Advances in Neural Information Processing Systems 3111-3119 (2013)
-
- Efficient Estimation of Word Representations in Vector Space
T. Mikolov and K. Chen and G. Corrado and J. Dean
arXiv preprint arXiv:1301.3781 (2013)
-
- Linguistic Regularities in Continuous Space Word Representations
T. Mikolov and W.-t. Yih and G. Zweig
Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT) 13 746-751 (2013)
-
- GloVe: Global Vectors for Word Representation
J. Pennington and R. Socher and C. D. Manning
Proceedings of the Empirical Methods in Natural Language Processing (EMNLP) 1532-1543 (2014)
-
- FastText.zip: Compressing Text Classification Models
A. Joulin and E. Grave and P. Bojanowski and M. Douze and H. Jégou and T. Mikolov
arXiv preprint arXiv:1612.03651 (2016)
-
- Deep Contextualized Word Representations
M. Peters and M. Neumann and M. Iyyer and M. Gardner and C. Clark and K. Lee and L. Zettlemoyer
Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 2227-2237 (2018)
https://doi.org/10/gft5gf
-
- BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding
J. Devlin and M.-W. Chang and K. Lee and K. Toutanova
Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 4171-4186 (2019)
-
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
Y. Liu and M. Ott and N. Goyal and J. Du and M. Joshi and D. Chen and O. Levy and M. Lewis and L. Zettlemoyer and V. Stoyanov
arXiv preprint arXiv:1907.11692 (2019)
-
- Large Image Datasets: A Pyrrhic Win for Computer Vision?
A. Birhane and V. U. Prabhu
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 1537-1547 (2021)