Generalized Linear Models

What Are Generalized Linear Models?

Linear regression models describe a linear relationship between a response and one or more predictive terms. Many times, however, a nonlinear relationship exists. Nonlinear Regression describes general nonlinear models. A special class of nonlinear models, called generalized linear models, uses linear methods.

Recall that linear models have these characteristics:

  • At each set of values for the predictors, the response has a normal distribution with mean μ.

  • A coefficient vector b defines a linear combination Xb of the predictors X.

  • The model is μ = Xb.

In generalized linear models, these characteristics are generalized as follows:

  • At each set of values for the predictors, the response has a distribution that can be normal, binomial, Poisson, gamma, or inverse Gaussian, with parameters including a mean μ.

  • A coefficient vector b defines a linear combination Xb of the predictors X.

  • A link function f defines the model as f(μ) = Xb.
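
For example, a minimal sketch of a generalized linear fit with fitglm (described in the following sections), assuming X is a numeric predictor matrix and y is a vector of nonnegative integer counts:

% Assumed inputs: predictor matrix X and count response y
mdl = fitglm(X,y,'Distribution','poisson');  % the link defaults to 'log'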

Prepare Data

To begin fitting a regression, put your data into a form that fitting functions expect. All regression techniques begin with input data in an array X and response data in a separate vector y, or input data in a table or dataset array tbl and response data as a column in tbl. Each row of the input data represents one observation. Each column represents one predictor (variable).

For a table or dataset array tbl, indicate the response variable with the 'ResponseVar' name-value pair:

mdl = fitglm(tbl,'ResponseVar','BloodPressure');

The response variable is the last column by default.
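
For example, a minimal sketch using a small illustrative table (the variable names and values are made up); because BloodPressure is the last variable, it is the default response:

% Illustrative (made-up) data
Age = [38; 45; 29; 50];
Weight = [71; 83; 64; 90];
BloodPressure = [124; 130; 118; 139];
tbl = table(Age,Weight,BloodPressure);
mdl = fitglm(tbl);  % same fit as fitglm(tbl,'ResponseVar','BloodPressure')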

You can use numeric categorical predictors. A categorical predictor is one that takes values from a fixed set of possibilities.

  • For a numeric array X, indicate the categorical predictors using the 'Categorical' name-value pair. For example, to indicate that predictors 2 and 3 out of six are categorical:

    mdl = fitglm(X,y,'Categorical',[2,3]);
    % or equivalently
    mdl = fitglm(X,y,'Categorical',logical([0 1 1 0 0 0]));
  • For a table or dataset array tbl, fitting functions assume that these data types are categorical:

    • Logical vector

    • Categorical vector

    • Character array

    • String array

    If you want to indicate that a numeric predictor is categorical, use the 'Categorical' name-value pair.
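
    For example, a minimal sketch, assuming tbl contains a numeric variable Cylinders that encodes category labels:

    % Assumes tbl has a numeric variable Cylinders to treat as categorical
    mdl = fitglm(tbl,'Categorical',{'Cylinders'});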

Represent missing numeric data as NaN. To represent missing data for other data types, see Missing Group Values.

  • For a 'binomial' model with data matrix X, the response y can be:

    • Binary column vector — Each entry represents success (1) or failure (0).

    • Two-column matrix of integers — The first column is the number of successes in each observation, and the second column is the number of trials in that observation.

  • For a 'binomial' model with table or dataset tbl:

    • Use the ResponseVar name-value pair to specify the column of tbl that gives the number of successes in each observation.

    • Use the BinomialSize name-value pair to specify the column of tbl that gives the number of trials in each observation.
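
For example, a minimal sketch, assuming tbl has a predictor x, a success count y, and a trial count n in each row:

% Assumes tbl contains variables x, y (successes), and n (trials)
mdl = fitglm(tbl,'Distribution','binomial', ...
    'ResponseVar','y','BinomialSize','n');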

Dataset Array for Input and Response Data

For example, to create a dataset array from an Excel® spreadsheet:

ds = dataset('XLSFile','hospital.xls',...
    'ReadObsNames',true);

To create a dataset array from workspace variables:

load carsmall
ds = dataset(MPG,Weight);
ds.Year = ordinal(Model_Year);

Table for Input and Response Data

To create a table from workspace variables:

load carsmall
tbl = table(MPG,Weight);
tbl.Year = ordinal(Model_Year);

Numeric Matrix for Input Data, Numeric Vector for Response

For example, to create numeric arrays from workspace variables:

load carsmall
X = [Weight Horsepower Cylinders Model_Year];
y = MPG;

To create numeric arrays from an Excel spreadsheet:

[X, Xnames] = xlsread('hospital.xls');
y = X(:,4); % response y is systolic pressure
X(:,4) = []; % remove y from the X matrix

Notice that the nonnumeric entries, such as sex, do not appear in X.

Choose Generalized Linear Model and Link Function

Often, your data suggests the distribution type of the generalized linear model.

Response Data Type                                        Suggested Model Distribution Type

Any real number                                           'normal'
Any positive number                                       'gamma' or 'inverse gaussian'
Any nonnegative integer                                   'poisson'
Integer from 0 to n, where n is a fixed positive value    'binomial'

Set the model distribution type with the Distribution name-value pair. After selecting your model type, choose a link function to map between the mean µ and the linear predictor Xb.

Value                                                     Description

'comploglog'
    log(-log(1 - µ)) = Xb

'identity', default for the distribution 'normal'
    µ = Xb

'log', default for the distribution 'poisson'
    log(µ) = Xb

'logit', default for the distribution 'binomial'
    log(µ/(1 - µ)) = Xb

'loglog'
    log(-log(µ)) = Xb

'probit'
    Φ^(-1)(µ) = Xb, where Φ is the normal (Gaussian) cumulative distribution function

'reciprocal', default for the distribution 'gamma'
    µ^(-1) = Xb

p (a number), default for the distribution 'inverse gaussian' (with p = -2)
    µ^p = Xb

A cell array of the form {FL FD FI}, containing three function handles created using @, which define the link (FL), the derivative of the link (FD), and the inverse link (FI). Or, a structure of function handles with the field Link containing FL, the field Derivative containing FD, and the field Inverse containing FI.
    User-specified link function (see Custom Link Function)

The nondefault link functions 'comploglog', 'loglog', and 'probit' are mainly useful for binomial models.
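
For example, a minimal sketch of a binomial fit with the nondefault 'comploglog' link, assuming x is a predictor vector and y and n hold the per-observation success and trial counts:

% x, y, and n are assumed: predictor, successes, and trials per observation
mdl = fitglm(x,[y n],'linear', ...
    'Distribution','binomial','Link','comploglog');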

Custom Link Function

The link function defines the relationship f(µ) = Xb between the mean response µ and the linear combination Xb = X*b of the predictors. You can choose one of the built-in link functions or define your own by specifying the link function FL, its derivative FD, and its inverse FI:

  • The link function FL calculates f(µ).

  • The derivative of the link function FD calculates df(µ)/dµ.

  • The inverse function FI calculates g(Xb) = µ.

You can specify a custom link function in either of two equivalent ways. Each way uses function handles that accept a single array of values representing µ or Xb and return an array of the same size. The function handles are either in a cell array or a structure:

  • Cell array of the form {FL FD FI}, containing three function handles, created using @, that define the link (FL), the derivative of the link (FD), and the inverse link (FI).

  • Structure s with three fields, each containing a function handle created using @:

    • s.Link — Link function

    • s.Derivative — Derivative of the link function

    • s.Inverse — Inverse of the link function

For example, to fit a model using the 'probit' link function:

x = [2100 2300 2500 2700 2900 ...
     3100 3300 3500 3700 3900 4100 4300]';
n = [48 42 31 34 31 21 23 23 21 16 17 21]';
y = [1 2 0 3 8 8 14 17 19 15 17 21]';
g = fitglm(x,[y n],...
    'linear','distr','binomial','link','probit')
g = 

Generalized Linear regression model:
    probit(y) ~ 1 + x1
    Distribution = Binomial

Estimated Coefficients:
                   Estimate     SE            tStat     pValue    
    (Intercept)      -7.3628       0.66815    -11.02    3.0701e-28
    x1             0.0023039    0.00021352     10.79    3.8274e-27

12 observations, 10 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 241, p-value = 2.25e-54

You can perform the same fit using a custom link function that performs identically to the 'probit' link function:

s = {@norminv,@(x)1./normpdf(norminv(x)),@normcdf};
g = fitglm(x,[y n],...
    'linear','distr','binomial','link',s)
g = 

Generalized Linear regression model:
    link(y) ~ 1 + x1
    Distribution = Binomial

Estimated Coefficients:
                   Estimate     SE            tStat     pValue    
    (Intercept)      -7.3628       0.66815    -11.02    3.0701e-28
    x1             0.0023039    0.00021352     10.79    3.8274e-27

12 observations, 10 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 241, p-value = 2.25e-54

The two models are the same.

Equivalently, you can write s as a structure instead of a cell array of function handles:

clear s     % remove the cell array version of s before reusing the name for a structure
s.Link = @norminv;
s.Derivative = @(x) 1./normpdf(norminv(x));
s.Inverse = @normcdf;
g = fitglm(x,[y n],...
    'linear','distr','binomial','link',s)
g = 

Generalized Linear regression model:
    link(y) ~ 1 + x1
    Distribution = Binomial

Estimated Coefficients:
                   Estimate     SE            tStat     pValue    
    (Intercept)      -7.3628       0.66815    -11.02    3.0701e-28
    x1             0.0023039    0.00021352     10.79    3.8274e-27

12 observations, 10 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 241, p-value = 2.25e-54

Choose Fitting Method and Model

There are two ways to create a fitted model.

  • Use fitglm when you have a good idea of your generalized linear model, or when you want to adjust your model later to include or exclude certain terms.

  • Use stepwiseglm when you want to fit your model using stepwise regression. stepwiseglm starts from one model, such as a constant, and adds or subtracts terms one at a time, choosing an optimal term each time in a greedy fashion, until it cannot improve further. Use stepwise fitting to find a good model, one that has only relevant terms.

    The result depends on the starting model. Usually, starting with a constant model leads to a small model. Starting with more terms can lead to a more complex model, but one that has lower mean squared error.

In either case, provide a model to the fitting function (which is the starting model for stepwiseglm).
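
For example, a minimal sketch of a stepwise fit that starts from a constant model and considers at most linear terms (the predictor matrix X, response y, and Poisson distribution are assumptions here):

% X and y are assumed; the Poisson choice is illustrative
mdl = stepwiseglm(X,y,'constant', ...
    'Distribution','poisson','Upper','linear');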

Specify a model using one of these methods.

  • Brief Model Name

  • Terms Matrix

  • Formula

Brief Model Name

Name               Model Type

'constant'         Model contains only a constant (intercept) term.
'linear'           Model contains an intercept and linear terms for each predictor.
'interactions'     Model contains an intercept, linear terms, and all products of pairs of distinct predictors (no squared terms).
'purequadratic'    Model contains an intercept, linear terms, and squared terms.
'quadratic'        Model contains an intercept, linear terms, interactions, and squared terms.
'polyijk'          Model is a polynomial with all terms up to degree i in the first predictor, degree j in the second predictor, and so on. Use numerals 0 through 9. For example, 'poly2111' has a constant plus all linear and product terms, and also contains terms with predictor 1 squared.
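
For example, a minimal sketch that requests a full quadratic model by its brief name, assuming X and y are a numeric predictor matrix and response vector:

% X and y are assumed numeric inputs
mdl = fitglm(X,y,'quadratic');  % default 'normal' distribution and 'identity' link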

Terms Matrix

A terms matrix T is a t-by-(p + 1) matrix specifying terms in a model, where t is the number of terms, p is the number of predictor variables, and +1 accounts for the response variable. The value of T(i,j) is the exponent of variable j in term i.

For example, suppose that an input includes three predictor variables x1, x2, and x3 and the response variable y in the order x1, x2, x3, and y. Each row of T represents one term:

  • [0 0 0 0] — Constant term or intercept

  • [0 1 0 0] — x2; equivalently, x1^0 * x2^1 * x3^0

  • [1 0 1 0] — x1*x3

  • [2 0 0 0] — x1^2

  • [0 1 2 0] — x2*(x3^2)

 

The 0 at the end of each term represents the response variable. In general, a column vector of zeros in a terms matrix represents the position of the response variable. If you have the predictor and response variables in a matrix and column vector, then you must include 0 for the response variable in the last column of each row.
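
For example, a minimal sketch of a terms matrix for a model with an intercept, linear terms in x1 and x2, and the x1*x2 interaction, assuming X is a three-column predictor matrix and y is the response vector:

% X (three columns) and y are assumed inputs
% Each row is one term; the final column of zeros marks the response
T = [0 0 0 0    % intercept
     1 0 0 0    % x1
     0 1 0 0    % x2
     1 1 0 0];  % x1*x2
mdl = fitglm(X,y,T);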

Formula

A formula for a model specification is a character vector or string scalar of the form

'y ~ terms', where:

  • y is the response name.

  • terms contains

    • Variable names

    • + to include the next variable

    • - to exclude the next variable

    • : to define an interaction, a product of terms

    • * to define an interaction and all lower-order terms

    • ^ to raise the predictor to a power, exactly as in * repeated, so ^ includes lower order terms as well

    • () to group terms

Tip

Formulas include a constant (intercept) term by default. To exclude a constant term from the model, include -1 in the formula.

Examples:

'y ~ x1 + x2 + x3' is a three-variable linear model with intercept.
'y ~ x1 + x2 + x3 - 1' is a three-variable linear model without intercept.
'y ~ x1 + x2 + x3 + x2^2' is a three-variable model with intercept and a x2^2 term.
'y ~ x1 + x2^2 + x3' is the same as the previous example, since x2^2 includes a x2 term.
'y ~ x1 + x2 + x3 + x1:x2' includes an x1*x2 term.
'y ~ x1*x2 + x3' is the same as the previous example, since x1*x2 = x1 + x2 + x1:x2.
'y ~ x1*x2*x3 - x1:x2:x3' has all interactions among x1, x2, and x3, except the three-way interaction.
'y ~ x1*(x2 + x3 + x4)' has all linear terms, plus products of x1 with each of the other variables.
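
For example, a minimal sketch that fits a formula-based model to the table created earlier from carsmall (the choice of MPG as the response and the default normal distribution is an assumption for illustration):

load carsmall
tbl = table(MPG,Weight);
tbl.Year = ordinal(Model_Year);
mdl = fitglm(tbl,'MPG ~ Weight + Year');  % Year enters as a categorical predictor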

 
