Setting sample weights during network training to control the contribution of each sample to the network outcome

What I need to do is train a classification network (like the Pattern Recognition Tool) where each sample has a different weight. The contribution of a sample to the network error would be proportional to its weight. For example, given samples with higher and lower weights, after training the network would classify the higher-weighted samples more successfully, while sacrificing some correct classifications of the lower-weighted samples. Does anyone know how to do this?

Currently my only idea for achieving this is, for each iteration of a loop:
1. Randomly assemble a subset of samples, with the chance of picking a sample proportional to its weight.
2. Train for 1 epoch.
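A minimal sketch of that resampling loop, assuming the Deep Learning Toolbox `patternnet` and the Statistics and Machine Learning Toolbox `randsample`; the variables `input` (R-by-N), `target` (c-by-N), and the weight vector `w` (1-by-N) are hypothetical placeholders:

```matlab
% Hypothetical sketch: weighted resampling, one epoch per outer iteration.
% Assumes input (R x N), target (c x N), sample weights w (1 x N).
net = patternnet(10);
net.trainParam.epochs = 1;              % train only 1 epoch per call
net.trainParam.showWindow = false;
N = size(input, 2);
p = w / sum(w);                         % selection probabilities
for iter = 1:100
    idx = randsample(N, N, true, p);    % weighted sampling with replacement
    net = train(net, input(:, idx), target(:, idx));
end
```

Because highly weighted samples are drawn more often, they contribute proportionally more terms to the training error, which is the effect the loop is meant to achieve.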
Prashant Kumar answered.
2025-11-20
target = ind2vec(classind);        % classind = vec2ind(target), integers 1:c
net = train(net, input, target);
output = net(input);
assigned = vec2ind(output);
errors = (assigned ~= classind);
Nerr = sum(errors)
1. Weight the input matrix.
2. Weight the target matrix.
3. Weight the output matrix.
4. Add noisy duplicates of poorly classified vectors to the input matrix.
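Option 4 could be sketched as below (a hypothetical illustration, not the original code): after an initial training pass, append jittered copies of the misclassified vectors, with the number of copies proportional to each sample's weight. The variables `input`, `target`, `classind`, and `w` are the same assumed placeholders as above:

```matlab
% Hypothetical sketch of option 4: add noisy duplicates of
% misclassified samples, more copies for higher-weighted samples.
assigned = vec2ind(net(input));
bad = find(assigned ~= classind);             % misclassified columns
ncopies = round(5 * w(bad) / max(w(bad)));    % copies proportional to weight
newX = []; newT = [];
for k = 1:numel(bad)
    j = bad(k);
    X = repmat(input(:, j), 1, ncopies(k));
    X = X + 0.01 * std(input, 0, 2) .* randn(size(X));  % small jitter
    newX = [newX, X];
    newT = [newT, repmat(target(:, j), 1, ncopies(k))];
end
net = train(net, [input, newX], [target, newT]);  % retrain on augmented set
```

The jitter scale (1% of each input row's standard deviation) and the maximum of 5 copies are arbitrary choices for the sketch; in practice they would need tuning.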
I've forgotten the details. However, in Mar-May 2009 (5 threads) I posted results comparing my choice of the duplication method with the other approaches for BioID classification.