Error functions, how to minimize errors (gradient descent)
What is alpha (the learning rate)?
Gradient Descent
Gradient descent changes the parameters step by step to gradually reduce the cost function; with each iteration we move closer to a minimum. Within each iteration, all parameters must be updated simultaneously! The size of a "step" per iteration is determined by the parameter alpha (the learning rate).
https://towardsdatascience.com/machine-learning-basics-part-1-concept-of-regression-31982e8d8ced
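As a concrete illustration, here is a minimal sketch of batch gradient descent for simple linear regression. The synthetic data, the learning rate alpha = 0.05, and the iteration count are illustrative assumptions, not recommendations:

```python
import numpy as np

# Synthetic data (hypothetical): y is roughly 2*x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.5, size=100)

theta0, theta1 = 0.0, 0.0  # parameters, initialized at zero
alpha = 0.05               # learning rate: the size of each step

for _ in range(2000):
    error = (theta0 + theta1 * x) - y
    # Partial derivatives of the mean-squared-error cost w.r.t. each parameter
    grad0 = error.mean()
    grad1 = (error * x).mean()
    # Both parameters are updated simultaneously, from the same gradients
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(theta0, theta1)  # converges toward roughly (1, 2)
```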
Partial Derivative Function
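Concretely, assuming a mean-squared-error cost J(θ) = (1/2m) Σᵢ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)² over m training examples, each parameter is updated using the partial derivative of J with respect to it:

```latex
\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta),
\qquad
\frac{\partial J}{\partial \theta_j}
  = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
```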
How to tune algorithms
Add parameters/features - for time series data, lagged values of the series (lags)
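A minimal sketch of adding lag features with pandas; the demand values and column names are hypothetical:

```python
import pandas as pd

# Hypothetical daily demand series
df = pd.DataFrame(
    {"demand": [120, 135, 128, 150, 160, 155, 170]},
    index=pd.date_range("2020-01-01", periods=7, freq="D"),
)

# Lagged copies of the target become extra input features
for lag in (1, 2, 3):
    df[f"demand_lag{lag}"] = df["demand"].shift(lag)

df = df.dropna()  # the first rows have no lag history
print(df)
```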
Regularization - ridge, lasso
Shrinkage of coefficients (a penalty on their magnitude)
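A quick sketch with scikit-learn (assumed available); the synthetic dataset and the alpha values are illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression, Ridge

# Synthetic data in which only 3 of 10 features actually matter
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=1.0)):
    model.fit(X, y)
    # Ridge shrinks coefficients toward zero; Lasso can set some exactly to zero
    print(type(model).__name__, np.round(model.coef_, 2))
```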
When to use what
Process, steps, examples of data prep
https://scikit-learn.org/stable/modules/preprocessing.html
Scaling, one-hot encoding, outliers
One-hot encoding converts categorical variables into binary indicator columns, a numeric form that ML algorithms can use to make better predictions
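A minimal sketch with scikit-learn; the colors and sizes are hypothetical data:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, StandardScaler

colors = np.array([["red"], ["green"], ["blue"], ["green"]])
sizes = np.array([[10.0], [12.5], [9.0], [14.0]])

# Categorical -> one binary indicator column per category
encoded = OneHotEncoder().fit_transform(colors).toarray()

# Numeric -> zero mean, unit variance (helps gradient- and distance-based models)
scaled = StandardScaler().fit_transform(sizes)

print(encoded)
print(scaled)
```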
Demantra | Oracle Products - predicts demand using time series modeling (lags, dummy variables)
Gradient descent - Derivation, partial differentiation
PCA analysis - Derivation
supply chain management machine learning
ARIMA Model for Time Series Forecasting in Python
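A minimal sketch using statsmodels (assumed available); the series and the (p, d, q) order are illustrative choices, not recommendations:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical series: a random walk with drift
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, size=100))

# Order (1, 1, 1): one AR lag, one differencing step, one MA lag
model = ARIMA(y, order=(1, 1, 1)).fit()
print(model.forecast(steps=5))  # the next five predicted values
```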
Exploratory data analysis with Spark
Cholesky decomposition
https://en.wikipedia.org/wiki/Cholesky_decomposition
Decomposes a symmetric positive-definite matrix A into L·Lᵀ, where L is a lower triangular matrix
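With NumPy this is a one-liner; the matrix below is an illustrative positive-definite example:

```python
import numpy as np

# A symmetric positive-definite matrix (illustrative)
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)  # the lower triangular factor
print(L)
print(L @ L.T)             # multiplying back reconstructs A
```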
PCA: we first calculate the covariance matrix, which summarizes how our variables all relate to one another.
We then break this matrix down into two separate components: direction (the eigenvectors) and magnitude (the eigenvalues).
https://towardsdatascience.com/a-one-stop-shop-for-principal-component-analysis-5582fb7e0a9c
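A sketch of those two steps in NumPy, on hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # hypothetical data: 100 samples, 3 variables
Xc = X - X.mean(axis=0)            # center each variable

cov = np.cov(Xc, rowvar=False)     # the matrix summarizing how variables relate

# Break it into direction (eigenvectors) and magnitude (eigenvalues)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]  # largest variance first
components = eigvecs[:, order]

scores = Xc @ components           # data projected onto the principal components
print(eigvals[order])
```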
https://www.kaggle.com/nishantbigdata/exploratory-data-analysis-with-spark
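A minimal first-pass EDA sketch with PySpark (assumed available); the file path and the "category" column are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eda").getOrCreate()

# Hypothetical CSV; header/schema options depend on the actual file
df = spark.read.csv("data.csv", header=True, inferSchema=True)

df.printSchema()                       # column names and inferred types
df.describe().show()                   # count, mean, stddev, min, max per numeric column
df.groupBy("category").count().show()  # frequencies of a hypothetical categorical column
```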