Barry Van Veen
  • 195 videos
  • 6,580,015 views
Foundations of Artificial Intelligence and Machine Learning Course Promo Video
I'm very excited about this new short-form course, offered first in April through InterPro at UW-Madison. Over the past few months I have been developing new content that condenses the key ideas in artificial intelligence and machine learning into digestible, accessible, and actionable insights. I've always enjoyed teaching learners with diverse backgrounds and interests, and I look forward to working with all who enroll. If you want to learn more about this timely and transformative topic, join us!
522 views

Videos

Convergence, Tracking, and the LMS Algorithm Step Size
1.7K views · 2 years ago
The convergence and tracking behavior of the LMS algorithm depend on the step size applied to the instantaneous gradient. The various performance tradeoffs involved in selecting a step size are discussed: small step sizes result in small misadjustment, but can have slow convergence and poor tracking performance, while large step sizes can result in unstable iterations.
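These tradeoffs are easy to see numerically. Below is a minimal sketch of my own (not code from the video): LMS is run on a noiseless toy system-identification problem with a small step size and with a deliberately oversized one; the filter, signal length, and step-size values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms(x, d, n_taps, mu):
    """LMS: step along the negative instantaneous gradient, w += mu * e * x_vec."""
    w = np.zeros(n_taps)
    sq_err = []
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ x_vec                    # instantaneous error
        w += mu * e * x_vec
        sq_err.append(e ** 2)
    return w, np.array(sq_err)

# Toy problem: identify an unknown 3-tap FIR filter from its input/output.
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]

_, err_small = lms(x, d, 3, mu=0.01)   # small misadjustment, slower convergence
_, err_large = lms(x, d, 3, mu=1.5)    # exceeds the stability bound: diverges
```

With mu=0.01 the squared error decays steadily toward zero; with mu=1.5 (well beyond roughly 2 over the input power times the filter length for this white input) the iteration blows up, illustrating the instability the description mentions.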
Solving the Least-Squares Problem with Gradient Descent: the Least-Mean-Square Algorithm
3K views · 2 years ago
The least-mean-square (LMS) algorithm is an iterative approach to finding the minimum mean-squared error filter weights based on taking steps in the direction of the negative gradient of the instantaneous error. The LMS algorithm is very simple and widely used in adaptive filtering.
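As a concrete illustration (my own sketch, not the video's code), the LMS update can identify an unknown FIR filter from input/output data; the filter taps, signal length, and step size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

h_true = np.array([1.0, -0.5, 0.25])     # unknown system: a 3-tap FIR filter
x = rng.standard_normal(5000)            # input signal
d = np.convolve(x, h_true)[:len(x)]      # desired signal = system output

mu = 0.02                                # step size
w = np.zeros(3)                          # adaptive filter weights
for n in range(2, len(x)):
    x_vec = x[n - 2:n + 1][::-1]         # [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ x_vec                 # instantaneous error
    w += mu * e * x_vec                  # step along the negative gradient
```

After the loop, w has converged to h_true: each iteration costs only a handful of multiplies, which is why LMS is so widely used in adaptive filtering.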
Finding the MMSE Filter Optimum Weights
1.7K views · 2 years ago
The math of solving the MMSE problem for the optimal weights. A linear algebra formulation rewrites the mean-squared error as a perfect square, which allows the MMSE weights to be identified by inspection without taking gradients. This is the matrix equivalent of the "completing the square" method used to find the minimum of a second-order polynomial.
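In the standard notation (my reconstruction, not a transcript of the video), with correlation matrix R = E[x xᴴ] and cross-correlation vector p = E[x d*], the completing-the-square step looks like:

```latex
\begin{aligned}
J(\mathbf{w}) &= E\!\left[\lvert d - \mathbf{w}^H \mathbf{x}\rvert^2\right]
  = \sigma_d^2 - \mathbf{w}^H \mathbf{p} - \mathbf{p}^H \mathbf{w}
    + \mathbf{w}^H \mathbf{R}\, \mathbf{w} \\
 &= \left(\mathbf{w} - \mathbf{R}^{-1}\mathbf{p}\right)^{H}
    \mathbf{R}
    \left(\mathbf{w} - \mathbf{R}^{-1}\mathbf{p}\right)
    + \sigma_d^2 - \mathbf{p}^H \mathbf{R}^{-1}\mathbf{p}
\end{aligned}
```

Since R is positive definite, the quadratic term is nonnegative and vanishes at w = R⁻¹p, so the MMSE weights can be read off by inspection, exactly as the description says.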
Introduction to Minimum Mean-Squared-Error Filtering
2.4K views · 2 years ago
Introduces the basic framework for MMSE filtering and applications to system modeling, equalization, and interference suppression.
Signals: The Basics
1.5K views · 2 years ago
Introductory ideas and notation concerning signals.
Matrix Completion
6K views · 3 years ago
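A common approach to this topic is a hard-impute iteration (a sketch of my own, assuming a noiseless low-rank matrix with known rank): alternately fill the missing entries from the current estimate and project back to low rank with the SVD.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth rank-2 matrix; hide about 40% of its entries
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(A.shape) < 0.6            # True where an entry is observed

X = np.where(mask, A, 0.0)                  # initial fill: zeros in the gaps
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :2] * s[:2]) @ Vt[:2]     # project to the best rank-2 matrix
    X = np.where(mask, A, X_low)            # re-impose the observed entries

rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
```

The observed entries are held fixed at every step, and the low-rank projection propagates their structure into the missing positions, driving the relative error well below the zero-fill baseline.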
Network Graphs and Page Rank Algorithm
12K views · 3 years ago
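As a small sketch (the link graph and damping factor are my own toy choices, not from the video), PageRank is power iteration on a damped, column-stochastic link matrix:

```python
import numpy as np

# Toy 4-page web: adj[i, j] = 1 if page i links to page j
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

# Column-stochastic transition matrix: column j says where page j's rank flows
M = (adj / adj.sum(axis=1, keepdims=True)).T

d = 0.85                        # damping factor
n = M.shape[0]
G = d * M + (1 - d) / n         # "Google matrix" with uniform teleportation

rank = np.full(n, 1 / n)        # start from the uniform distribution
for _ in range(100):
    rank = G @ rank             # power iteration
```

The ranks stay a probability distribution throughout, and page 2, which receives the most inbound links, ends up with the largest rank.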
Eigendecomposition, Singular Value Decomposition, and Power Iterations
4.7K views · 3 years ago
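For instance (a sketch of my own with an arbitrary random symmetric matrix), power iteration recovers the dominant eigenpair, which can be checked against `np.linalg.eigh`:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B @ B.T                          # symmetric PSD: real, nonnegative eigenvalues

v = rng.standard_normal(5)
for _ in range(3000):
    v = A @ v                        # multiply...
    v /= np.linalg.norm(v)           # ...and renormalize
lam = v @ A @ v                      # Rayleigh quotient: dominant eigenvalue

evals, evecs = np.linalg.eigh(A)     # reference; eigh sorts eigenvalues ascending
```

The iterate aligns with the top eigenvector (up to sign), and the Rayleigh quotient matches the largest eigenvalue from the full eigendecomposition.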
Bias-Variance Tradeoff in Low Rank Approximations
697 views · 3 years ago
Principal Component Analysis
1.5K views · 3 years ago
Singular Value Decomposition and Regularization of Least Squares Problems
3K views · 3 years ago
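One way to see the connection (my own sketch; the sizes and λ are arbitrary): Tikhonov/ridge regularization damps each SVD component by the filter factor s/(s² + λ), which matches the regularized normal-equations solution exactly:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)
lam = 0.5                                        # regularization weight

U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Filter factors s/(s^2 + lam) shrink directions with small singular values most
x_svd = Vt.T @ (s / (s**2 + lam) * (U.T @ b))

# Same answer from the regularized normal equations (A^T A + lam I) x = A^T b
x_ne = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)
```

The SVD view makes the regularization mechanism explicit: directions with singular values far below sqrt(λ) are suppressed, which is what stabilizes ill-conditioned least-squares problems.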
The Singular Value Decomposition and Least Squares Problems
4.3K views · 3 years ago
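A short sketch (random A and b of my choosing) of solving the least-squares problem through the SVD pseudoinverse, checked against `np.linalg.lstsq`:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 3))      # tall, full-rank matrix
b = rng.standard_normal(8)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)       # x = V Sigma^{-1} U^T b  (pseudoinverse)

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```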
Properties of the Singular Value Decomposition
1.9K views · 3 years ago
The Singular Value Decomposition
4.4K views · 3 years ago
Clustering Data with the K Means Algorithm
1.4K views · 3 years ago
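A bare-bones Lloyd's-iteration sketch (two synthetic blobs and a deterministic initialization of my own choosing, not material from the video):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two well-separated 2-D clusters: 50 points near (5, 5), 50 near (-5, -5)
X = np.vstack([rng.standard_normal((50, 2)) + [5, 5],
               rng.standard_normal((50, 2)) + [-5, -5]])

k = 2
centers = X[[0, 50]].copy()          # init: one point drawn from each blob
for _ in range(20):
    # Assignment step: label each point with its nearest center
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    # Update step: move each center to the mean of its cluster
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
```

With this initialization the assignment and update steps settle immediately onto the two blobs, so each center converges to its cluster's sample mean.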
Low Rank Decompositions of Matrices
11K views · 3 years ago
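A quick numerical check of my own (random matrix, rank chosen arbitrarily): the truncated SVD gives the best rank-k approximation, and its Frobenius error equals the energy of the discarded singular values (the Eckart–Young result):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k]           # truncated SVD: sum of k outer products

err = np.linalg.norm(A - A_k, 'fro')
expected = np.sqrt((s[k:] ** 2).sum())      # energy of the discarded singular values
```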
Regularization and Ridge Regression for Supervised Learning
815 views · 3 years ago
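To illustrate the regularization effect (toy data of my own; the λ grid is arbitrary): increasing the ridge weight shrinks the coefficient norm monotonically.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)

norms = []
for lam in [0.0, 0.1, 1.0, 10.0, 100.0]:
    # Ridge solution of min ||A w - b||^2 + lam ||w||^2
    w = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)
    norms.append(np.linalg.norm(w))
```

At λ = 0 this is ordinary least squares; as λ grows, every coefficient is pulled toward zero, trading a little bias for reduced variance.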
Complexity, Overfitting, and Cross Validation
918 views · 3 years ago
Geometry of the Squared Error Surface
819 views · 3 years ago
Solving the Least Squares Problem Using Gradients
2.5K views · 3 years ago
Solving the Least-Squares Problem Using Geometry
2K views · 3 years ago
Approximate Solutions, Norms, and the Least-Squares Problem
2.1K views · 3 years ago
Representing Data with Bases
557 views · 3 years ago
Subspaces in Machine Learning
1.2K views · 3 years ago
Uniqueness of Solutions to Learning Problems
538 views · 3 years ago
Linear Independence and Rank in Learning
3.3K views · 3 years ago
Patterns in Data and Outer Products
668 views · 3 years ago
Classifying Data and Matrix Multiplication
539 views · 3 years ago
Fitting Models to Data and Matrix Multiplication
707 views · 3 years ago

Comments

  • @chaoxi8966 · 14 days ago

    Thanks sir, do you have some code example on this?

  • @suntech7094 · 18 days ago

    great

  • @Kainnable · 29 days ago

    Great video. You showed everything. I suspect some may need to know that the amplitude comes from the square root of the sum of the squares of x and y. They may also need to know that the ArcTan (y/x) gives the angle...

  • @jameshopkins3541 · 1 month ago

THERE ARE A LOT OF THESE USELESS VIDEOS

  • @andou_ryuu3205 · 1 month ago

    I wish my professor even explained 10% of this as effectively

  • @jameshopkins3541 · 1 month ago

If you're not going to explain or demonstrate anything, why do you make videos? To get views?????

  • @jameshopkins3541 · 1 month ago

AT LEAST USE SUBTITLES. DON'T DO IT LIKE A SCHOOLKID

  • @jameshopkins3541 · 1 month ago

I THINK IT IS USELESS TO TRY TO UNDERSTAND THIS EGIP

  • @jameshopkins3541 · 1 month ago

    DO IT IN PDF

  • @jameshopkins3541 · 1 month ago

NO LIKE FOR UGLY KIDDING GRAPHICS

  • @jameshopkins3541 · 1 month ago

REDO IT, BUT WELL, USING BIG IMAGES

  • @jameshopkins3541 · 1 month ago

WHY NO EXAMPLE????????? BECAUSE IT DOESN'T WORK

  • @jameshopkins3541 · 1 month ago

CAN YOU EXPLAIN SOMETHING ABOUT THE FFT ALGORITHM????????

  • @ibissantananavarro7586 · 1 month ago

A refresher on things known long ago, thank you.

  • @tonyxu4310 · 1 month ago

    It's really helpful for me, thanks!

  • @r410a8 · 1 month ago

    What is the formula of u[n] at 3:26, such that the value of the variable a determines the value of x[n] in each case? I don't understand. And why at 4:36, when you apply the z-transform formula to x[n], do you write Sum from n=-oo to +oo of a^n*z^-n instead of Sum from -oo to +oo of a^n*u[n]*z^-n, given that x[n] is a^n*u[n], not just a^n? I don't understand this either.

  • @abnereliberganzahernandez6337 · 2 months ago

you suck bro

  • @theoryandapplication7197 · 2 months ago

    thanks sir

  • @bachkhoa1975 · 2 months ago

    This is a good overview of the FFT. It would be nice to explain how the DFT convolution sum is derived. Also, the de-interlacing of the inputs was glossed over (not explained clearly); only the reversed binary notation was mentioned (this is just an after-the-fact observation of How, not an explanation of Why). Readers who dive deeper into the splitting of a larger N-point FFT into two smaller N/2-point FFTs, or understand the relationships between the twiddle factors (and their periodic nature), would understand and retain the FFT technique better, and be able to conquer any arbitrary size of N-point FFT (N being a power of 2, of course).
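For readers wanting the Why behind the split this commenter describes, here is a minimal recursive decimation-in-time sketch (my own code, not from the video) that makes the even/odd split and the twiddle factors explicit:

```python
import numpy as np

def fft_radix2(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])        # N/2-point FFT of even-indexed samples
    odd = fft_radix2(x[1::2])         # N/2-point FFT of odd-indexed samples
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    # Butterfly: combine the two half-size DFTs into the full N-point DFT
    return np.concatenate([even + tw * odd, even - tw * odd])
```

Each level halves the problem and reuses the periodic twiddle factors, which is where the N log N speedup comes from; the result agrees with `np.fft.fft` for any power-of-two length.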

  • @jeevanraju8834 · 2 months ago

You're the best sir. Thanks, it really helped for my exam :)

  • @smesui1799 · 2 months ago

    Excellent !

  • @VolumetricTerrain-hz7ci · 2 months ago

    There is an unknown way to visualize subspaces, or vector spaces. You can stretch the width of the x axis, for example, in the right line of a 3D stereo image, and also get depth, as shown below. L R |____| |______| TIP: To get the 3D depth, close one eye and focus on either the left or right line, and then open it. This is because the z axis uses x to get depth. Which means you can get double depth in the image.... 4D depth??? :O p.s. You're a good teacher!

  • @PankajSingh-dc2qp · 2 months ago

    PFE @6:50 is wrong. The residues are 6/5 and 4/5.

  • @PankajSingh-dc2qp · 2 months ago

    @ 3:50 the direction of the vector is wrong... it should be in the opposite direction

  • @PankajSingh-dc2qp · 2 months ago

    @ 1:11 the product in pole-zero form should start from k=1.

  • @adrenochromeaddict4232 · 2 months ago

    great video. short, straight to the point

  • @eng.ameeryaseen1602 · 2 months ago

    Please, can you support my question: what are the conditions for linear phase in FIR filters? Enhance your answer with formulas.

  • @ONoesBird · 3 months ago

    Beautiful explanation! Loved it.

  • @tuongnguyen9391 · 3 months ago

Why has the link to all the signal processing videos died?

  • @kottapallisaiswaroop9849 · 3 months ago

Could you please make a video on the Fast Iterative Shrinkage-Thresholding Algorithm and denoise an audio signal using it?

  • @ushamemoriya5391 · 3 months ago

    Consider an at-rest linear system described by y'' + 25y = 2 sin t + 5 cos 5t. The response of this system will be: decaying oscillations in time; oscillatory in time; growing oscillations in time; none of the above.

  • @user-xk5rx9xe6w · 3 months ago

    15:33 ♡♡

  • @user-on4yf8tj1c · 3 months ago

    Hello, I need help clarifying two concepts. First, 4:31 makes perfect sense to me. We're just using the definition of Euler's Identity; if we wanted to re-expand back to regular Acos(x) + Aisin(x) notation, and our function didn't have an imaginary component, the second term would go to zero. I.e., I can represent any sinusoid with Euler - even if it doesn't have an imaginary component. This is nice because we can break up our sinusoid into time dependent & independent components. This makes perfect sense. Then, at 4:57 we seem to transition into something completely different. I understand the math for both, but I don't understand why I would use 4:57 over 4:31? What was wrong with just using Euler's Identity like 4:31? For example, at 7:59 I could have just as well used Euler's Identity like done at 4:31 instead of using this cos definition. Could you please help me connect these two ideas? Thank you, sir.

  • @komuna5984 · 4 months ago

    Thanks a lot for this awesome content!

  • @mrtoast244 · 4 months ago

    This video is so underrated, it's literally the most straight forward explanation of this topic I've seen

  • @georgyurumov8095 · 4 months ago

Brilliant, thank you!

  • @AlexAlex-fo9gt · 5 months ago

4:35-5:40 In my calculation of the DFT, X[2] = -1, not 0. Is the picture of the DFT at 7:36 right?

  • @tomoliveri6251 · 5 months ago

    I know some of these words

  • @gynxrm2237 · 5 months ago

The best I have watched so far.

  • @paedrufernando2351 · 5 months ago

Will it be online? I could attend online, as I am based in India.

  • @PikaGMS · 5 months ago

    i just want to say that i love you man

  • @mariacedeno3068 · 5 months ago

    this is great :) Thanks!!

  • @HKHasty · 6 months ago

    Awesome channel! Really helping me through advanced DSP!

  • @Net_Flux · 6 months ago

    Not sure why you left out the recursive relation between the odd and even functions and the DFT. I was so confused where the speed gain was from.

  • @jimmy21584 · 6 months ago

    Came here because I’m reading the LoRA LLM paper. Thank you for the clear summary!

  • @user-wg7hu6xe1z · 6 months ago

After we get the optimized w, how do we get the optimized b?

  • @user-hk7nf5gt4b · 6 months ago

    One advantage of the DTFT is its ability to provide greater frequency resolution than the DFT for a given N when there is a single dominant frequency. One application could be using a DSP to accurately estimate the frequency of a guitar string for tuning to the proper pitch. We wouldn't want our tuner to be limited to only 1/T. Also, a small battery-powered DSP cannot do a very large FFT for more resolution. Can you derive or simulate the resolution enhancement limits with the W(e^jw) convolution?

  • @user-hk7nf5gt4b · 6 months ago

    One advantage of the DTFT is that you get a continuous frequency domain. With a single dominating frequency, the peak frequency can be resolved with higher resolution than with the DFT and its frequency samples of 1/T. Can you calculate or show a simulation of the limiting resolution of the DTFT over the DFT, based on your convolution with the W(e^jw) factor? Given the same N.

  • @tuongnguyen9391 · 6 months ago

What happened to the website?

  • @josecarlosribeiro3628 · 7 months ago

Congratulations, Professor Van Veen, for your ability and beautiful presentation! Merry Christmas and Happy New Year! God bless you! Jacareí - São Paulo - Brasil