Covariate shift detection
Is there any standard approach for detecting covariate shift between training and test data? This would help validate the assumption that covariate shift exists in my database, which contains a few hundred images.

There are methods such as the Kullback–Leibler divergence and the Wald–Wolfowitz test for detecting non-randomness and covariate shift.
A simple test for a quick analysis of covariate shift is to build a machine learning model that is asked to distinguish the training data from the production data.
If the model can tell the difference between the training and production datasets, that can be a sign of covariate shift.
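A minimal sketch of this idea (often called a "domain classifier"), assuming scikit-learn is available; the data here is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical feature matrices: production features have a shifted mean.
X_train = rng.normal(0.0, 1.0, size=(200, 5))
X_prod = rng.normal(1.5, 1.0, size=(200, 5))

# Label 0 = training, 1 = production, then try to tell them apart.
X = np.vstack([X_train, X_prod])
y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_prod))])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# AUC near 0.5 -> the two sets look alike; AUC near 1.0 -> covariate shift.
print(f"domain-classifier AUC: {auc:.2f}")
```

For images you would first extract features (pixel statistics, embeddings, etc.) and feed those in place of the synthetic matrices.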
2017-07-17 11:51:58
Adaptive learning with covariate shift-detection for motor imagery-based brain–computer interface
http://link.springer.com/article/10.1007/s0050001519375
EWMA model-based shift-detection methods for detecting covariate shifts in non-stationary environments (http://www.sciencedirect.com/science/article/pii/S0031320314002878)
2017-07-17 11:58:54
Here is a simple procedure you can use:
1. learn a classifier to distinguish between the train and test data (using the regular X features)
2. compute the phi correlation coefficient to estimate the quality of the classifier, i.e. the separability of the train/test data
3. set a threshold (e.g. 0.2) above which you can claim there is a covariate shift (and start looking at corrections)
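The steps above can be sketched as follows, assuming scikit-learn and synthetic illustrative data (the phi coefficient for a binary confusion matrix equals the Matthews correlation coefficient):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical train/test feature matrices with a shifted feature mean.
X_tr = rng.normal(0.0, 1.0, size=(300, 4))
X_te = rng.normal(1.0, 1.0, size=(300, 4))

# Step 1: label the origin of each row and learn to distinguish them.
X = np.vstack([X_tr, X_te])
y = np.concatenate([np.zeros(len(X_tr)), np.ones(len(X_te))])
X_fit, X_eval, y_fit, y_eval = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)
clf = LogisticRegression().fit(X_fit, y_fit)

# Step 2: phi coefficient == Matthews correlation for binary labels.
phi = matthews_corrcoef(y_eval, clf.predict(X_eval))
print(f"phi = {phi:.2f}")

# Step 3: compare against the chosen threshold.
if phi > 0.2:
    print("covariate shift suspected")
```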
2017-07-17 12:30:57
You don't give many clues about what properties of the images you might be considering, but it seems that what you want to measure is the difference between the distributions of the training and test sets. A useful place to start would be the Kullback–Leibler divergence, which is a measure of the difference between two distributions.
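A minimal sketch of this, assuming SciPy and a one-dimensional summary statistic per image (the statistic and data here are hypothetical):

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(2)
# Hypothetical 1-D image statistics (e.g. mean brightness per image).
train_vals = rng.normal(0.4, 0.1, size=500)
test_vals = rng.normal(0.6, 0.1, size=500)

# Histogram both samples on a shared support, then compare distributions.
bins = np.linspace(0.0, 1.0, 21)
p, _ = np.histogram(train_vals, bins=bins, density=True)
q, _ = np.histogram(test_vals, bins=bins, density=True)

eps = 1e-10  # smoothing so the divergence stays finite in empty bins
# scipy.stats.entropy(p, q) normalizes its inputs and returns KL(p || q).
kl = entropy(p + eps, q + eps)
print(f"KL(train || test) = {kl:.2f}")
```

Note that the KL divergence is asymmetric and zero only when the two distributions match; a symmetrized variant such as the Jensen–Shannon divergence can be more convenient as a distance.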
2017-07-17 12:46:02
The problem of covariate shift ultimately results in datasets with different underlying mathematical structure. Manifold learning estimates a low-dimensional representation of high-dimensional data, thereby revealing that underlying structure. Manifold-learning techniques are often not linear projections, and are therefore different from, and often more powerful than, standard PCA.
I've used manifold-learning techniques (e.g. Isomap, MDS) to visualize (and, if possible, quantify) the "(dis)similarity" between the train and test datasets.
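A minimal sketch of this approach, assuming scikit-learn and synthetic illustrative features; here MDS embeds the pooled data into 2-D, where the train/test groups can be plotted or a simple centroid gap computed:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
# Hypothetical high-dimensional features for train and test images.
X_train = rng.normal(0.0, 1.0, size=(100, 20))
X_test = rng.normal(1.5, 1.0, size=(100, 20))

# Embed the pooled data into 2-D with metric MDS.
X = np.vstack([X_train, X_test])
emb = MDS(n_components=2, random_state=0).fit_transform(X)

# A large gap between the group centroids in the embedding hints at shift;
# in practice you would also scatter-plot emb, colored by train/test.
gap = np.linalg.norm(emb[:100].mean(axis=0) - emb[100:].mean(axis=0))
print(f"centroid gap in 2-D embedding: {gap:.2f}")
```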
6 days ago