My latest work published on arXiv.org:
- Hui Jiang, “A Latent Space Theory for Emergent Abilities in Large Language Models,” arXiv:2304.09960.
- Behnam Asadi, Hui Jiang, “On Approximation Capabilities of ReLU Activation and Softmax Output Layer in Neural Networks,” arXiv:2002.04060.
- Y. Lin, K. Ahmadi, H. Jiang, “Bandlimiting Neural Networks Against Adversarial Attacks,” arXiv:1905.12797.
- H. Jiang, “Why Learning of Large-Scale Neural Networks Behaves Like Convex Optimization,” arXiv:1903.02140.
- H. Jiang, “A New Perspective on Machine Learning: How to do Perfect Supervised Learning,” arXiv:1901.02046.
- Q. Liu, H. Jiang, Z. Ling, X. Zhu, S. Wei, Y. Hu, “Combing Context and Commonsense Knowledge Through Neural Networks for Solving Winograd Schema Problems,” arXiv:1611.04146.
- D. Liu, W. Lin, S. Zhang, S. Wei, H. Jiang, “Neural Networks Models for Entity Discovery and Linking,” arXiv:1611.03558.
- R. Soltani, H. Jiang, “Higher Order Recurrent Neural Networks,” arXiv:1605.00064.
- Q. Liu, Z. Ling, H. Jiang, Y. Hu, “Part-of-Speech Relevance Weights for Learning Word Embeddings,” arXiv:1603.07695.
- D. J. Im, C. D. Kim, H. Jiang, R. Memisevic, “Generating images with recurrent adversarial networks,” arXiv:1602.05110.