module = nn.Linear(inputDimension, outputDimension)

Applies a linear transformation to the incoming data, i.e. y = Ax + b, where A is the module's weight matrix and b its bias vector. The input tensor given in forward(input) must be a vector (1D tensor).

You can create a layer in the following way:

 require 'nn'
 module = nn.Linear(10, 5)  -- 10 inputs, 5 outputs
Usually this would be added to a network of some kind, e.g.:
 mlp = nn.Sequential();
 mlp:add(module)
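The container then applies its modules in order; for instance (a minimal sketch, reusing the layer created above):
 out = mlp:forward(torch.rand(10)) -- same as module:forward here, since mlp contains only this one layer
 print(out)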
The weights and biases (A and b) can be viewed with:
 print(module.weight)
 print(module.bias)
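In this implementation the weight is stored as an outputDimension x inputDimension matrix and the bias as a vector of outputDimension elements, so for the layer created above (a quick check):
 print(module.weight:size()) -- 5x10
 print(module.bias:size())   -- 5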
The gradients for these parameters can be seen with:
 print(module.gradWeight)
 print(module.gradBias)
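These buffers are filled in by a backward pass. A minimal sketch, assuming a gradient with respect to the layer's output is available (here just a random 5-element tensor):
 input = torch.rand(10)             -- hypothetical 1D input
 gradOutput = torch.rand(5)         -- hypothetical gradient from the layer above
 module:zeroGradParameters()        -- clear previously accumulated gradients
 module:forward(input)              -- the nn contract expects forward before backward
 module:backward(input, gradOutput) -- accumulates into gradWeight and gradBias
 print(module.gradWeight)
 print(module.gradBias)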
As with all nn modules, the linear transformation is applied by calling forward:
 x = torch.rand(10) -- a random input with 10 elements
 y = module:forward(x)
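As a quick sanity check (not part of the module's API), the same result can be recomputed directly from the parameters; the difference should be zero up to floating-point error:
 print(y - (torch.mv(module.weight, x) + module.bias)) -- torch.mv is matrix-vector multiplication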