Superpredictive modeling for sequential analysis
Methods for machine learning and prediction employ ensemble models, collections of many smaller models, or metamodels, to record, describe, and ultimately predict the behavior of an observed system. While ensemble models of stochastic processes have already found success in the modern world, especially in language processing and data compression, contemporary methodologies remain rudimentary in that they do not fully exploit the interactions among the metamodels. Proper use of these interactions should yield predictors that more quickly supply accurate information about a system's future behavior. Physical memory constraints limit the number of metamodels a computational device can manage, and processor speeds limit the number of computations per second, imposing computational barriers on both the quantity and the precision of algorithmic operations for prediction. We concentrate on computationally efficient algorithmic methods for online model learning and prediction that can measure metamodel interaction in real time without externally imposed parameters that may not suit the data. Further advances reduce the memory cost of storing certain types of metamodels, allowing a device to store more of them, which in turn enlarges the available model space and increases the flexibility of the ensemble model. The results are methods that promote interaction between metamodels, support autonomy and adaptability, offer fast learning and retrieval of observational data, and minimally impact memory resources. We also expect these new types of ensemble models and predictors, which we call superpredictors, to support native classification and detection.
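To make the abstract's central idea concrete, the following is a minimal sketch (not the paper's actual method) of an online ensemble predictor: several simple metamodels each predict the next bit of a sequence, and an exponential-weights mixture combines them, with weights updated from each metamodel's observed log loss. The classes `OrderKModel` and `Ensemble`, and all parameters such as `eta`, are illustrative names introduced here, not constructs from the source.

```python
import math

class OrderKModel:
    """Metamodel: a k-th order Markov predictor with Laplace smoothing.
    Predicts P(next bit = 1) from counts conditioned on the last k bits."""
    def __init__(self, k):
        self.k = k
        self.counts = {}  # context tuple -> [count of 0s, count of 1s]

    def predict(self, history):
        ctx = tuple(history[-self.k:]) if self.k else ()
        c0, c1 = self.counts.get(ctx, [0, 0])
        return (c1 + 1) / (c0 + c1 + 2)

    def update(self, history, bit):
        ctx = tuple(history[-self.k:]) if self.k else ()
        self.counts.setdefault(ctx, [0, 0])[bit] += 1

class Ensemble:
    """Online mixture of metamodels via exponential weights (Hedge)."""
    def __init__(self, models, eta=1.0):
        self.models = models
        self.weights = [1.0] * len(models)
        self.eta = eta  # learning rate for the weight update

    def predict(self, history):
        total = sum(self.weights)
        return sum(w * m.predict(history)
                   for w, m in zip(self.weights, self.models)) / total

    def update(self, history, bit):
        # Penalize each metamodel by its log loss, then let it learn.
        for i, m in enumerate(self.models):
            p = m.predict(history)
            loss = -math.log(p if bit else 1.0 - p)
            self.weights[i] *= math.exp(-self.eta * loss)
            m.update(history, bit)

# Usage: learn an alternating bit sequence online.
ens = Ensemble([OrderKModel(k) for k in range(3)])
history = []
for t in range(200):
    bit = t % 2
    ens.update(history, bit)
    history.append(bit)
```

With `eta = 1`, each weight becomes the product of that metamodel's past predicted probabilities, so the mixture behaves like a Bayesian average; after training, the order-1 and order-2 metamodels dominate and the ensemble assigns low probability to a 1 following a 1.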