Opportunistic activity and context recognition systems are characterized by using sensing devices that happen to be available at runtime, rather than devices pre-defined at the design time of the system, in order to achieve a recognition goal. Whenever a user and/or application states a recognition goal at runtime, the system configures an ensemble from the best available set of sensors for the specified goal. This paper presents an approach showing how machine learning technologies (classification, fusion and anomaly detection) are integrated into a prototypical opportunistic activity and context recognition system (referred to as the OPPORTUNITY Framework). We define a metric that quantifies an ensemble's capabilities with respect to a recognition goal, and we evaluate the approach against the requirements of an opportunistic system (e.g. computing an ensemble's configuration and reconfiguration at runtime).
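The runtime ensemble configuration described above can be sketched as a greedy selection driven by a goal-dependent capability metric. The sensor names, relevance scores, and the metric itself below are illustrative assumptions, not the paper's actual metric or API:

```python
# Hypothetical sketch: configure a sensor ensemble for a recognition goal.
# Sensors, relevance scores, and the capability metric are assumed examples.

def ensemble_capability(sensors, goal):
    """Score an ensemble: mean relevance of its sensors to the goal's activities."""
    if not sensors:
        return 0.0
    total = sum(s["relevance"].get(a, 0.0) for s in sensors for a in goal)
    return total / (len(sensors) * len(goal))

def configure_ensemble(available, goal, max_size=3):
    """Greedily add the sensor that most improves the capability metric."""
    ensemble = []
    candidates = list(available)
    while candidates and len(ensemble) < max_size:
        best = max(candidates, key=lambda s: ensemble_capability(ensemble + [s], goal))
        if ensemble_capability(ensemble + [best], goal) <= ensemble_capability(ensemble, goal):
            break  # no remaining candidate improves the ensemble
        ensemble.append(best)
        candidates.remove(best)
    return ensemble

# Example: three currently available sensors, goal = recognize gestures.
sensors = [
    {"name": "wrist_acc", "relevance": {"walk": 0.9, "gesture": 0.8}},
    {"name": "chest_acc", "relevance": {"walk": 0.7, "gesture": 0.2}},
    {"name": "mic",       "relevance": {"walk": 0.1, "gesture": 0.3}},
]
goal = ["gesture"]
chosen = configure_ensemble(sensors, goal)
print([s["name"] for s in chosen])
```

Reconfiguration at runtime would amount to re-running the selection whenever the set of available sensors changes.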