PyTorch/MATLAB ML Toolbox
The following is a cheat sheet that I frequently refer to when writing or reading PyTorch code. These are not my own notes; each item links to the official PyTorch or MATLAB documentation page, or to someone else's notes on the internet.
- torch.nn (PyTorch)
- Activation Functions/Transfer Functions (MATLAB ML Toolbox); a rough PyTorch mapping is sketched in the first code block at the end of this list
- compet - Competitive transfer function.
- elliotsig - Elliot sigmoid transfer function.
- hardlim - Positive hard limit transfer function.
- hardlims - Symmetric hard limit transfer function.
- logsig - Logarithmic sigmoid transfer function.
- netinv - Inverse transfer function.
- poslin - Positive linear transfer function.
- purelin - Linear transfer function.
- radbas - Radial basis transfer function.
- radbasn - Radial basis normalized transfer function.
- satlin - Positive saturating linear transfer function.
- satlins - Symmetric saturating linear transfer function.
- softmax - Soft max transfer function.
- tansig - Symmetric sigmoid transfer function.
- tribas - Triangular basis transfer function.
- Activation Functions (PyTorch)
- Loss Functions (PyTorch)
- Performance Functions/Loss Functions (MATLAB ML Toolbox); rough PyTorch equivalents are sketched in the second code block at the end of this list
- mae / Mean absolute error performance function
- mse / Mean squared normalized error performance function
- sae / Sum absolute error performance function
- sse / Sum squared error performance function
- torch.optim / Weight Update Algorithms (PyTorch)
- Training Functions / Weight Update Algorithms (MATLAB ML Toolbox); a minimal torch.optim training loop is sketched in the last code block at the end of this list
- trainb / Batch training with weight and bias learning rules
- trainbu / Batch unsupervised weight/bias training
- trainc / Cyclical order weight/bias training
- trainr / Random order incremental training with learning functions
- trains / Sequential order incremental training with learning functions
- traingd / Gradient descent backpropagation
- traingdm / Gradient descent with momentum backpropagation
- traingdx / Gradient descent with momentum and adaptive learning rate backpropagation
- trainlm / Levenberg-Marquardt backpropagation
- trainru / Unsupervised random order weight/bias training
- trainscg / Scaled conjugate gradient backpropagation
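
Below is a minimal sketch of how some of the MATLAB transfer functions listed above roughly correspond to PyTorch activations. The mapping is my own reading of the two documentation sets, not an official equivalence, and the input tensor is just made-up sample data.

```python
import torch
import torch.nn as nn

x = torch.linspace(-3.0, 3.0, steps=7)  # made-up sample inputs

# Rough PyTorch counterparts of a few MATLAB transfer functions
print(torch.sigmoid(x))           # logsig  - logarithmic (logistic) sigmoid
print(torch.tanh(x))              # tansig  - symmetric sigmoid (tanh)
print(nn.ReLU()(x))               # poslin  - positive linear (ReLU)
print(nn.Identity()(x))           # purelin - linear / identity
print(nn.Softmax(dim=0)(x))       # softmax - soft max
print(nn.Hardtanh(0.0, 1.0)(x))   # satlin  - saturating linear on [0, 1]
print(nn.Hardtanh(-1.0, 1.0)(x))  # satlins - symmetric saturating linear on [-1, 1]
```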
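
The MATLAB performance functions map fairly directly onto PyTorch losses by switching the reduction. This is a sketch with made-up prediction/target tensors; note that the MATLAB docs describe mse as a normalized error function, while the plain nn.MSELoss call below is just the unnormalized mean squared error.

```python
import torch
import torch.nn as nn

pred   = torch.tensor([0.2, 0.7, 0.1])  # made-up predictions
target = torch.tensor([0.0, 1.0, 0.0])  # made-up targets

mae = nn.L1Loss()(pred, target)                   # mae - mean absolute error
mse = nn.MSELoss()(pred, target)                  # mse - mean squared error
sae = nn.L1Loss(reduction="sum")(pred, target)    # sae - sum absolute error
sse = nn.MSELoss(reduction="sum")(pred, target)   # sse - sum squared error
print(mae.item(), mse.item(), sae.item(), sse.item())
```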
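
torch.optim does not ship the same algorithms as the MATLAB training functions (for example, there is no built-in Levenberg-Marquardt optimizer corresponding to trainlm), but plain SGD and SGD with momentum play roughly the roles of traingd and traingdm. A minimal training loop on made-up data, assuming a small regression model, might look like this:

```python
import torch
import torch.nn as nn

# Made-up regression data, only to demonstrate the update loop
x = torch.randn(64, 3)
y = torch.randn(64, 1)

model = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()  # ~ mse performance function

# optimizer = torch.optim.SGD(model.parameters(), lr=0.01)               # ~ traingd
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)   # ~ traingdm

for epoch in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass and loss
    loss.backward()                # backpropagation
    optimizer.step()               # weight/bias update
```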