Artificial intelligence and the directions of Brazilian Portuguese processing

Authors

Marcelo Finger

DOI:

https://doi.org/10.1590/s0103-4014.2021.35101.005

Keywords:

Natural language processing, Neural networks, Linguistic context, Brazilian Portuguese

Abstract

In this article we present a position statement on the field of natural language processing for Portuguese, covering its developments from the field's beginnings to the explosion of modern applications based on machine learning. We explore the challenges the field must currently face, of both a technical and an ethical and moral nature, and conclude with the unshakable association between natural language processing and linguistic studies.


References

AHO, A. V. et al. Compilers: principles, techniques, and tools. New York: Addison-Wesley Longman Publishing Co., Inc., 1986.

AIZERMAN, M. A. et al. Theoretical Foundations of the Potential Function Method in Pattern Recognition Learning. Automation and Remote Control, n.25, p.821-37, 1964.

ANDERSON, D.; BURNHAM, K. Model Selection and Multi-Model Inference. 2.ed. New York: Springer, 2004.

BAHDANAU, D. et al. Neural Machine Translation by Jointly Learning to Align and Translate. 2016. Available at: <http://arxiv.org/abs/1409.0473>.

BARTLETT, P. L. et al. Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks. Neural Comput., v.10, n.8, p.2159-73, 1998. Available at: <http://dblp.uni-trier.de/db/journals/neco/neco10.html#BartlettMM98>.

BENDER, E. M.; KOLLER, A. Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data. In: PROCEEDINGS OF THE 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2020. p.5185-98. Online: Association for Computational Linguistics. Available at: <https://doi.org/10.18653/v1/2020.acl-main.463>.

BENTHEM, J. van. Language in Action. s. l.: MIT Press, 1995.

BLEI, D. M. et al. Latent Dirichlet Allocation. J. Mach. Learn. Res., v.3, p.993-1022, 2003. Available at: <https://doi.org/10.1162/jmlr.2003.3.4-5.993>.

BROWN, T. B. et al. Language Models Are Few-Shot Learners. 2020. Available at: <http://arxiv.org/abs/2005.14165>.

BUCHANAN, B. G. A (Very) Brief History of Artificial Intelligence. AI Magazine, v.26, n.4, p.53, 2005.

CARPENTER, B. Type-Logical Semantics. Cambridge: The MIT Press, 1997.

CHARNIAK, E. Statistical Language Learning. Cambridge: The MIT Press, 1993.

CHOMSKY, N. Aspects of the Theory of Syntax. Cambridge: The MIT Press, 1965. Available at: <http://www.amazon.com/Aspects-Theory-Syntax-Noam-Chomsky/dp/0262530074>.

CLARK, A.; LAPPIN, S. Unsupervised Learning and Grammar Induction. The Handbook of Computational Linguistics and Natural Language Processing, n.57, 2010.

DAMERAU, F. J. Markov Models and Linguistic Theory. De Gruyter Mouton, 1971. Available at: <https://doi.org/10.1515/9783110908589>.

DEVLIN, J. et al. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. 2019. Available at: <http://arxiv.org/abs/1810.04805>.

DWIVEDI, V. P. et al. Benchmarking Graph Neural Networks. 2020. Available at: <http://arxiv.org/abs/2003.00982>.

HARRELL, F. E. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer, 2001.

HOCHREITER, S. et al. Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies. In: KREMER, S. C.; KOLEN, J. F. (Ed.) A Field Guide to Dynamical Recurrent Neural Networks. s.l.: IEEE Press, 2001.

HOPCROFT, J. E.; ULLMAN, J. D. Introduction to Automata Theory, Languages, and Computation. s. l.: Addison-Wesley Publishing Company, 1979.

HORNIK, K. et al. Multilayer Feedforward Networks Are Universal Approximators. Neural Networks, v.2, n.5, p.359-66, 1989. Available at: <https://doi.org/10.1016/0893-6080(89)90020-8>.

JURAFSKY, D.; MARTIN, J. H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. New York: Prentice Hall PTR, 2000.

KOEHN, P. Statistical Machine Translation. s. l.: Cambridge University Press, 2009. Available at: <https://doi.org/10.1017/CBO9780511815829>.

LAMBEK, J. The Mathematics of Sentence Structure. American Mathematical Monthly, v.65, p.154-69, 1958.

LIU, Y. et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 2019. Available at: <http://arxiv.org/abs/1907.11692>.

LUONG, M.-T. et al. Effective Approaches to Attention-Based Neural Machine Translation. 2015. Available at: <http://arxiv.org/abs/1508.04025>.

MAASS, W. et al. A Comparison of the Computational Power of Sigmoid and Boolean Threshold Circuits. In: ROYCHOWDHURY, V. et al. (Ed.) Theoretical Advances in Neural Computation and Learning. Boston, MA: Springer US, 1994. p.127-50. Available at: <https://doi.org/10.1007/978-1-4615-2696-4_4>.

MANNING, C. D.; SCHÜTZE, H. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press, 1999.

MARTIN, L. et al. CamemBERT: A Tasty French Language Model. In: PROCEEDINGS OF THE 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2020. p.7203-19. Available at: <https://doi.org/10.18653/v1/2020.acl-main.645>.

MIKOLOV, T. et al. Efficient Estimation of Word Representations in Vector Space. 2013. Available at: <http://arxiv.org/abs/1301.3781>.

MIKOLOV, T. et al. Distributed Representations of Words and Phrases and Their Compositionality. In: BURGES, C. J. C. et al. (Ed.) Advances in Neural Information Processing Systems. Curran Associates, Inc., 2013. v.26. Available at: <https://proceedings.neurips.cc/paper/2013/file/9aa42b31882ec039965f3c4923ce901b-Paper.pdf>.

MINSKY, M.; PAPERT, S. Perceptrons. Cambridge, MA: The MIT Press, 1969.

MOORTGAT, M. Categorial Type Logics. In: BENTHEM, J. van; MEULEN, A. ter (Ed.) Handbook of Logic and Language. Elsevier North-Holland: The MIT Press, 1997.

NEWELL, A.; SIMON, H. A. GPS, a Program That Simulates Human Thought. In: FEIGENBAUM, E. A.; FELDMAN, J. (Ed.) Computers and Thought. s. l.: McGraw-Hill, 1963. p.279-93.

NOVIKOFF, A. B. On Convergence Proofs on Perceptrons. In: PROCEEDINGS OF THE SYMPOSIUM ON THE MATHEMATICAL THEORY OF AUTOMATA, 12, p.615-22. Polytechnic Institute of Brooklyn, New York, 1962.

OCH, F. J.; NEY, H. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. In: PROCEEDINGS OF THE 40TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2002. p.295-302.

PAPADIMITRIOU, C. H. Computational Complexity. s.l.: Addison-Wesley, 1994.

PEREIRA, F. C. N.; SHIEBER, S. M. Prolog and Natural-Language Analysis. s.l.: Center for the Study of Language and Information, 1987.

PIRES, T. et al. How Multilingual Is Multilingual BERT? 2019. Available at: <http://arxiv.org/abs/1906.01502>.

ROSENBLATT, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, v.65, n.6, p.386-408, 1958. Available at: <https://doi.org/10.1037/h0042519>.

ROSENBLATT, F. Principles of Neurodynamics. New York: Spartan, 1962.

RUMELHART, D. E. et al. Learning Internal Representations by Error Propagation. In: RUMELHART, D. E.; MCCLELLAND, J. L. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: The MIT Press, 1986a. v.1: Foundations, p.318-62.

RUMELHART, D. E. et al. Learning Representations by Back-Propagating Errors. Nature, v.323, n.6088, p.533-36, 1986b. Available at: <http://dx.doi.org/10.1038/323533a0>.

SALVATORE, F. et al. A Logical-Based Corpus for Cross-Lingual Evaluation. In: Deep Learning for Low-Resource NLP Workshop at EMNLP 2019, 2019.

SANH, V. et al. DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. 2020. Available at: <http://arxiv.org/abs/1910.01108>.

SCARSELLI, F. et al. The Graph Neural Network Model. IEEE Transactions on Neural Networks, v.20, n.1, p.61-80, 2009. Available at: <https://doi.org/10.1109/TNN.2008.2005605>.

SOUZA, F. et al. BERTimbau: Pretrained BERT Models for Brazilian Portuguese. In: CERRI, R.; PRATI, R. C. (Ed.) Intelligent Systems. Cham: Springer International Publishing, 2020. p.403-17.

SUTSKEVER, I. et al. Sequence to Sequence Learning with Neural Networks. 2014. Available at: <http://arxiv.org/abs/1409.3215>.

VAPNIK, V. N. The Nature of Statistical Learning Theory. New York: Springer-Verlag Inc., 1995.

VASWANI, A. et al. Attention Is All You Need. 2017. Available at: <http://arxiv.org/abs/1706.03762>.

WAGNER FILHO, J. A. et al. The BrWaC Corpus: A New Open Resource for Brazilian Portuguese. In: PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018). Miyazaki, Japan: European Language Resources Association (ELRA), 2018. Available at: <https://www.aclweb.org/anthology/L18-1686>.

YANG, Z. et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 2020. Available at: <http://arxiv.org/abs/1906.08237>.


Published

2021-04-30

Issue

v.35, n.101 (2021)

Section

Artificial Intelligence

How to Cite

Finger, M. (2021). Inteligência Artificial e os rumos do processamento do português brasileiro. Estudos Avançados, 35(101), 51-72. https://doi.org/10.1590/s0103-4014.2021.35101.005