

iFLYTEK Laboratory for Neural Computing for Machine Learning (iNCML)

Welcome to the homepage of the iFLYTEK Laboratory for Neural Computing for Machine Learning (iNCML). The lab supports research on neural computing models and methods for machine learning, with applications to speech recognition and understanding, natural language processing, and image/video recognition.



Research Aims of the Lab

  1. Explore new neural computing models for machine learning

  2. Investigate neural representations of knowledge for artificial cognition
  3. Advance machine intelligence in speech recognition and understanding, natural language processing and computer vision
  1. Explore new neural computing models for machine learning:

With the help of advanced computing resources (particularly general-purpose GPU computing platforms), we will explore new neural computing models and algorithms that take advantage of the big data available in today’s mobile Internet era. In particular, we will focus on designing novel and effective unsupervised learning algorithms that allow neural networks to exploit abundant real-world unlabeled data for self-learning and adaptation. Moreover, we will investigate more advanced neural models with long- and/or short-term memory capabilities that exploit sequential information within a longer context window for more complex AI tasks, such as human-machine dialogue.
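
As a minimal sketch of the kind of memory-based sequence model described above (the framework, layer sizes and data here are illustrative assumptions, not the lab's actual models), a recurrent network with long short-term memory could be set up as follows:

<code python>
# Sketch only: a sequence model with long short-term memory (LSTM).
# All names, sizes and data are illustrative assumptions.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)              # discrete tokens -> vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # memory over the sequence
        self.out = nn.Linear(hidden_dim, num_classes)                 # predict from the final state

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, (h_last, _) = self.lstm(x)       # h_last: (1, batch, hidden_dim)
        return self.out(h_last.squeeze(0))  # one score vector per sequence

# Purely illustrative usage with random token ids.
model = SequenceClassifier()
batch = torch.randint(0, 10000, (4, 20))   # 4 sequences of length 20
scores = model(batch)                       # shape: (4, 5)
</code>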

  2. Investigate neural representations of knowledge for artificial cognition

Neural models have achieved huge successes in data modeling. Compared with data modeling, representing world knowledge is a more challenging problem: knowledge representation requires organizing human knowledge (including common sense, common knowledge and domain-specific information) in an orderly way, as opposed to learning statistical models from pooled data sets as in data modeling. We will investigate a new research direction, called neural representation of knowledge, in which all relevant concepts, discrete in nature, are represented as distributed representations in continuous semantic spaces, and a large-scale artificial neural network (called a semantic brain) is used to store all possible relations and semantic links among these concepts. The advantages of this new approach lie in two aspects: i) distributed representations of discrete concepts allow regular learning methods to be used for knowledge representation, and the networks may be expanded in size to store as many conceptual relations as needed; ii) all concepts and their relations are stored as distributed representations in the semantic brain, allowing heuristic search strategies to perform basic thinking in a human-like way, such as reasoning and association.
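
As a rough, hedged sketch of the distributed-representation idea above (the module names and the translation-style scoring rule are assumptions for illustration, not the lab's semantic-brain design), discrete concepts and relations can be embedded in a shared continuous space and a candidate relation between two concepts scored there:

<code python>
# Sketch only: embed discrete concepts/relations as vectors and score
# (head concept, relation, tail concept) triples in a continuous semantic space.
import torch
import torch.nn as nn

class ConceptRelationScorer(nn.Module):
    def __init__(self, num_concepts=1000, num_relations=50, dim=64):
        super().__init__()
        self.concepts = nn.Embedding(num_concepts, dim)    # distributed concept representations
        self.relations = nn.Embedding(num_relations, dim)  # distributed relation representations

    def score(self, head, relation, tail):
        # Higher (less negative) score = the triple is more plausible under the embeddings.
        h = self.concepts(head)
        r = self.relations(relation)
        t = self.concepts(tail)
        return -torch.norm(h + r - t, dim=-1)  # translation-based plausibility (TransE-style)

# Purely illustrative: score the hypothetical triple (concept 3, relation 7, concept 42).
scorer = ConceptRelationScorer()
s = scorer.score(torch.tensor([3]), torch.tensor([7]), torch.tensor([42]))
</code>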

  3. Advance machine intelligence in speech recognition and understanding, natural language processing and computer vision

The new neural computing models and algorithms will be applied to multimedia AI tasks involving speech, language and image/video. More particularly, we will focus on three areas: i) human-machine dialogue systems via either speech or text, with typical applications such as personal assistant agents running on smartphones; ii) deep natural language processing and understanding, with typical applications such as automatic machine Q&A systems in the general domain (e.g., semantic search engines) or domain-specific query systems in areas such as medicine, health and law; iii) image and video scene analysis, with typical applications such as autonomous robot navigation and control.

Address

iFLYTEK Laboratory for Neural Computing for Machine Learning (iNCML)
Lassonde 2054, Department of Electrical Engineering and Computer Science
York University, 4700 Keele Street, Toronto, Ontario, Canada
