Hi everyone, I have a rather fundamental question regarding the analysis of my data involving nonlinear fitting, and I hope it is appropriate to post it here. For the sake of brevity I will not provide the whole code and only summarize the essential steps, but of course I can add any details you request.

I have data representing the response to a stimulus as a function of the distance to the stimulation site. As expected, the response variable decays with distance, and this decay seems best approximated by a sigmoidal fit. So I applied the BOLTZMANN equation to the data and let MATLAB predict confidence bounds for new observations:

% Define model function (BOLTZMANN):
f = @(beta0,conds) beta0(1) + ((beta0(2)-beta0(1)) ./ (1 + exp((beta0(3) - conds) ./ beta0(4))));

% Find initialization parameters:
resp50 = (max(resp) + min(resp))/2;
x50 = 5000; % Educated guess
inidat = [0, max(resp), resp50, x50];

% Estimate the fitted function:
[beta,res,jac,covb] = nlinfit(conds', fliplr(resp), f, inidat);

% Evaluate the fit with prediction intervals:
xfit = linspace(min(conds), max(conds), 100);
[yfit,delta,n,df,varpred] = nlpredci(f, xfit, beta, res, 'Covar', covb, 'PredOpt', 'observation'); % Function edited, see below
yfit = fliplr(yfit);
delta = fliplr(delta');
varpred = fliplr(varpred');

(I tried to embed a plot of the result here, but embedding the image did not work.)

I am now addressing the question of how far I can get from the reference site before responses must be regarded as non-maximum. That is, from which distance onward are my (predicted) responses significantly different from the maximum at 0 mm? I did not find a ready-made solution to this kind of question, so, perhaps a little naively, I developed my own approach, and I would like to ask you whether it is appropriate or whether there is a superior method.
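For reference, the anonymous function above is the standard Boltzmann sigmoid, reading beta0 = [A1, A2, x0, dx] as lower asymptote, upper asymptote, midpoint, and slope factor:

f(x) = A1 + (A2 - A1) / (1 + exp((x0 - x)/dx))

(Note that with this parameter order, the third and fourth entries of inidat should be a distance-like midpoint and a slope; the vector I pass mixes in resp50, which is a response value, so this is one of the things I am unsure about.)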
My idea was simply to run multiple pairwise t-tests using the statistics from the NLINFIT/NLPREDCI functions (which I edited to also return the sample size n, the degrees of freedom df, and the predicted variance varpred, so I would not have to do those calculations on my own). I then iterate through the predictions until the tested pair is significantly different:

alpha = 0.05;
for i = 2:length(yfit)
    testdiff = yfit(1) - yfit(i);
    % Common MSE is the mean of both estimated variances (see ONLINESTATBOOK, p. 376):
    mse = (varpred(1) + varpred(i))/2;
    % Common SE:
    testse = sqrt(2*mse/n); % Correct?
    % Compute t-value:
    t = testdiff/testse;
    % Common df:
    testdf = (n-1) + (n-1); % Correct?
    % Two-tailed p-value; note tcdf, not tpdf (tpdf returns the density, not a tail probability):
    p = 2*tcdf(-abs(t), testdf);
    if p < alpha/(i-1) % With BONFERRONI correction (correct?)
        m = i;
        break;
    end
end

As you can see, I also tried to add a BONFERRONI correction of the alpha level to account for these multiple comparisons. I am aware that the t-test I used may be inappropriate for correlated pairs (which is evidently the case here). By my rule of thumb, I would expect the cut-off x-value to lie roughly where the confidence intervals of the fit no longer intersect. Surprisingly, I obtain a much earlier cut-off, as you can see in the picture I tried to embed above.
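As a cross-check of that rule of thumb, the interval-overlap cut-off can also be located directly from the NLPREDCI output. A minimal sketch, assuming yfit, delta, and xfit from the code above, with delta the half-width of the 95% prediction band:

% Lower edge of the prediction band at the reference point (0 mm):
lo0 = yfit(1) - delta(1);
% First index whose upper band edge falls below the lower edge at the
% reference, i.e. where the two intervals no longer overlap:
m_ci = find(yfit + delta < lo0, 1);
if ~isempty(m_ci)
    xcut = xfit(m_ci); % Cut-off distance by the interval-overlap heuristic
end

This is only the visual heuristic made explicit, not a formal test, but comparing m_ci with the index m from the t-test loop should show how far apart the two criteria are.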
Prashant Kumar answered on 2025-11-20.