IJCNN 2010 Invited Sessions
Quantum-Inspired Neuro-Evolutionary Models
Dr. Marley Maria Bernardes Rebuzzi Vellasco
Pontifical Catholic University of Rio de Janeiro, Brazil
Tuesday, July 20
14:30h - 15:30h
Recurrent neural networks (RNNs) are a class of neural networks whose connections between units form a directed cycle. This arrangement of connections makes RNNs exhibit dynamic temporal behavior, which is useful, for example, in problems where a short-term memory is appropriate or required. Training RNNs, however, can be time consuming, and one approach to improving the learning process is the use of evolutionary computation. With new quantum-inspired evolutionary techniques that use a numerical representation and provide faster optimization, training the recurrent neural network weights with these evolutionary methods is a natural step towards a better learning process. In this talk, we will present new methods based on quantum-inspired evolutionary optimization for recurrent neural network training and neuro-evolution in general. The proposed quantum neuro-evolutionary algorithm can be applied to define not only the neural network weights, but also the best activation function for each processing unit, the best neural network topology, and the most relevant input variables.
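To make the general scheme concrete, the sketch below shows one simple way a quantum-inspired evolutionary search with a real-valued representation might evolve RNN weights: each weight is described by a probability distribution (a "quantum gene") that is repeatedly sampled, evaluated, and contracted toward the best observation. This is an illustrative assumption written in Python with NumPy, not the algorithm presented in the talk; the toy memory task, population size, and update constants are invented for the example.

```python
# Minimal sketch (not the speaker's algorithm): a quantum-inspired
# evolutionary search over RNN weights, with each weight described by a
# Gaussian "quantum gene" (mean mu, width sigma) that is sampled and
# contracted toward the best observed individual.
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(weights, inputs, n_hidden):
    """Run a simple Elman-style RNN and return its output sequence."""
    n_in = inputs.shape[1]
    # Unpack the flat weight vector into the RNN matrices.
    i = 0
    W_in = weights[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    W_rec = weights[i:i + n_hidden * n_hidden].reshape(n_hidden, n_hidden); i += n_hidden * n_hidden
    W_out = weights[i:i + n_hidden].reshape(1, n_hidden)
    h = np.zeros(n_hidden)
    outputs = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        outputs.append((W_out @ h)[0])
    return np.array(outputs)

def fitness(weights, inputs, targets, n_hidden):
    """Negative mean squared error on a short sequence task."""
    pred = rnn_forward(weights, inputs, n_hidden)
    return -np.mean((pred - targets) ** 2)

# Toy task: predict the previous input value (requires short-term memory).
T, n_hidden = 30, 4
inputs = rng.uniform(-1, 1, size=(T, 1))
targets = np.roll(inputs[:, 0], 1)

n_weights = n_hidden * 1 + n_hidden * n_hidden + n_hidden
mu = np.zeros(n_weights)          # centres of the quantum genes
sigma = np.ones(n_weights)        # widths of the quantum genes

for generation in range(200):
    # "Observe" the quantum population: sample classical individuals.
    pop = mu + sigma * rng.standard_normal((20, n_weights))
    scores = np.array([fitness(ind, inputs, targets, n_hidden) for ind in pop])
    best = pop[np.argmax(scores)]
    # Move the distributions toward the best observation and narrow them,
    # loosely mimicking the contraction step of quantum-inspired EAs.
    mu += 0.1 * (best - mu)
    sigma *= 0.995

print("best MSE:", -fitness(mu, inputs, targets, n_hidden))
```

The same sampled-distribution representation could, in principle, also encode discrete choices such as activation functions or topology, which is the direction the abstract describes.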
Marley Maria Bernardes Rebuzzi Vellasco received the BSc and MSc degrees in Electrical Engineering from the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil, in 1984 and 1987, respectively, and the PhD degree in Computer Science from University College London (UCL) in 1992. Dr. Vellasco is currently an Associate Professor in the Electrical Engineering Department of PUC-Rio and heads the Applied Computational Intelligence Laboratory (ICA) of PUC-Rio. She is the author of 33 papers in professional journals and more than 220 papers in conference proceedings in the area of soft computing. She has published three books and 15 book chapters, and has supervised 45 MSc dissertations and 19 PhD theses. Since 1991 she has participated in more than 60 research projects. Her research interests include Neural Networks, Fuzzy Logic, Neuro-Fuzzy Systems, Evolutionary Computation, Robotics, and Intelligent Agents, applied to decision support systems, pattern classification, time-series forecasting, control, optimization, Knowledge Discovery in Databases, and Data Mining.
Self-Organization with Information Theoretic Learning
Jose C. Principe
University of Florida, USA
Wednesday, July 21
15:30h - 16:30h
Unsupervised learning algorithms seek to discover the structure in the data using the samples alone. The most important classes of unsupervised algorithms are clustering, principal curves, and vector quantization, and each has been derived independently of the others. This talk presents a new self-organizing principle called the Principle of Relevant Information (PRI), motivated by fundamental ideas of information theory. We will also show how each of these classes of unsupervised algorithms is a special case of the PRI, obtained by weighting differently the minimization of entropy and the distance to the original data set. Examples will also be provided.
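As a rough illustration of how such a principle can be written down, the sketch below implements one common information-theoretic-learning formulation of the PRI cost, J(X) = H2(X) + λ·D_CS(X, X0), where H2 is Renyi's quadratic entropy and D_CS is the Cauchy-Schwarz divergence to the original data X0, both estimated with Gaussian (Parzen) kernels, and minimizes it by gradient descent. The data set, kernel size, step size, and λ below are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of a PRI-style cost: entropy minimization balanced against
# a divergence to the original data, both estimated with Gaussian kernels.
import numpy as np

rng = np.random.default_rng(1)

def info_potential(A, B, sigma):
    """Cross information potential (1/NM) * sum_ij G_sigma(a_i - b_j)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean()

def pri_cost(X, X0, lam, sigma):
    """H2(X) + lam * D_CS(X, X0), dropping the constant log V(X0) term."""
    VX = info_potential(X, X, sigma)
    VC = info_potential(X, X0, sigma)
    return (lam - 1) * np.log(VX) - 2 * lam * np.log(VC)

def pri_grad(X, X0, lam, sigma):
    """Analytic gradient of pri_cost with respect to the points X."""
    N, M = len(X), len(X0)
    GX = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    GC = np.exp(-((X[:, None, :] - X0[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    VX, VC = GX.mean(), GC.mean()
    dVX = 2 * (GX[:, :, None] * (X[None, :, :] - X[:, None, :])).sum(1) / (N * N * sigma ** 2)
    dVC = (GC[:, :, None] * (X0[None, :, :] - X[:, None, :])).sum(1) / (N * M * sigma ** 2)
    return (lam - 1) * dVX / VX - 2 * lam * dVC / VC

# Toy data: two 1-D clusters. Large lambda keeps X close to the data;
# small lambda lets entropy minimization collapse X toward the cluster modes.
X0 = np.concatenate([rng.normal(-2, 0.3, 40), rng.normal(2, 0.3, 40)])[:, None]
X = X0.copy()
lam, sigma, step = 2.0, 0.5, 0.5
for _ in range(200):
    X -= step * pri_grad(X, X0, lam, sigma)
print("cost:", pri_cost(X, X0, lam, sigma), "points span:", X.min(), X.max())
```

Varying λ in this kind of cost is what lets one recover clustering-like, principal-curve-like, or vector-quantization-like solutions from a single principle.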
Jose C. Principe (M’83-SM’90-F’00) is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches advanced signal processing, machine learning, and artificial neural network (ANN) modeling. He is the BellSouth Professor and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL). His primary area of interest is the processing of complex nonstationary signals with adaptive neural models. The CNEL Lab has been studying signal and pattern recognition principles based on information theoretic criteria (entropy and mutual information).
Dr. Principe is an IEEE and AIMBE Fellow and holds Doctor Honoris Causa degrees from two universities, in Italy and Brazil. He received the Gabor Award from the International Neural Network Society in 2006 and the IEEE EMBS Career Achievement Award in 2007. He is Past Chair of the Technical Committee on Neural Networks of the IEEE Signal Processing Society, Past President of the International Neural Network Society, Past Editor-in-Chief of the IEEE Transactions on Biomedical Engineering, and a past member of the Scientific Board of the FDA. He is the founding Editor-in-Chief of the IEEE Reviews in Biomedical Engineering and a member of the Advisory Board of the University of Florida Brain Institute. Dr. Principe has more than 400 publications and has directed 64 PhD dissertations and 65 Master's theses. He wrote an interactive electronic book entitled “Neural and Adaptive Systems: Fundamentals Through Simulation,” published by John Wiley and Sons in 2000, co-authored a book on brain-machine interfaces in 2007, and is working on two new books, on kernel adaptive filtering and on information theoretic signal processing and machine learning.
Linear Expansions with Nonlinear Cost Functions: Modeling, Representation, and Partitioning
Prof. Erkki Oja
Helsinki University of Technology, Finland
Thursday, July 22
11:50h - 12:50h
In modern exploratory data analysis and data mining, linear expansions for vectorized data are still widely used, despite their apparent simplicity. Linear statistical models often allow a natural interpretation of the latent variables, and may be the only possibility for very large datasets due to computational complexity. Classical examples are principal component analysis (PCA) and factor analysis (FA); newer popular ones are independent component analysis (ICA) and non-negative matrix factorization (NMF). All of these are blind latent variable models in the sense that no prior knowledge of the variables is used, except some broad properties such as gaussianity, statistical independence, or non-negativity. Adding some prior knowledge, such as the covariance structure, allows tuning the discovered hidden patterns or signals in a semi-blind fashion.
The talk gives a tutorial overview of this field, mostly concentrating on the results of the author's own research group. Special focus is on ICA, non-negative ICA, variants of NMF, and some semi-blind extensions. Two applications are covered: modeling large-scale climate phenomena through semi-blind latent variable models, and graph partitioning through projective NMF.
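For readers unfamiliar with the basic model, the sketch below shows standard NMF with the classical multiplicative updates (Lee and Seung), minimizing the Frobenius reconstruction error under non-negativity constraints. It illustrates only the plain factorization mentioned in the abstract, not the projective-NMF variant used for graph partitioning in the talk; the toy data and rank are assumptions.

```python
# Minimal sketch of standard NMF: V (n x m, non-negative) is approximated
# by W @ H with W (n x rank) and H (rank x m) kept non-negative by
# multiplicative updates.
import numpy as np

rng = np.random.default_rng(2)

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Factor a non-negative matrix V into non-negative W and H."""
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative data generated from a rank-3 model plus small noise.
V = rng.random((40, 3)) @ rng.random((3, 25)) + 0.01 * rng.random((40, 25))
W, H = nmf(V, rank=3)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

Projective NMF, as mentioned in the abstract, constrains the factorization further so that the basis also acts as the encoder, which is what makes it useful for partitioning problems.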
Erkki Oja (S'75-M'78-SM'90-F'00) received the Dr.Sc. degree from Helsinki University of Technology in 1977. He is Director of the Adaptive Informatics Research Centre and Professor of Computer Science at the Laboratory of Computer and Information Science, Helsinki University of Technology, Finland, and Chairman of the Finnish Research Council for Natural Sciences and Engineering. He holds an honorary doctorate from Uppsala University, Sweden. He has been a research associate at Brown University, Providence, RI, and a visiting professor at the Tokyo Institute of Technology, Japan. He is the author or coauthor of more than 280 articles and book chapters on pattern recognition, computer vision, and neural computing, and three books: "Subspace Methods of Pattern Recognition" (New York: Research Studies Press and Wiley, 1983), which has been translated into Chinese and Japanese; "Kohonen Maps" (Amsterdam, The Netherlands: Elsevier, 1999); and "Independent Component Analysis" (New York: Wiley, 2001; also translated into Chinese and Japanese). His research interests are in the study of principal component and independent component analysis, self-organization, statistical pattern recognition, and applying artificial neural networks to computer vision and signal processing.
Prof. Oja is a member of the Finnish Academy of Sciences, a Founding Fellow of the International Association for Pattern Recognition (IAPR), Past President of the European Neural Network Society (ENNS), and a Fellow of the International Neural Network Society (INNS). He is a member of the editorial boards of several journals and has served on the program committees of several recent conferences, including the International Conference on Artificial Neural Networks (ICANN), the International Joint Conference on Neural Networks (IJCNN), and Neural Information Processing Systems (NIPS). Prof. Oja is the recipient of the 2006 IEEE Computational Intelligence Society Neural Networks Pioneer Award.