From 325bbc1c05c0e6f2452fd179f7ef3aad5380fc77 Mon Sep 17 00:00:00 2001
From: Youwen Wu
Date: Thu, 27 Feb 2025 20:53:38 -0800
Subject: [PATCH] auto-update(nvim): 2025-02-27 20:53:38

---
 .../by-course/math-6a/course-notes/main.typ | 34 +++++++++++++++++--
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/documents/by-course/math-6a/course-notes/main.typ b/documents/by-course/math-6a/course-notes/main.typ
index 25e2415..66de73b 100644
--- a/documents/by-course/math-6a/course-notes/main.typ
+++ b/documents/by-course/math-6a/course-notes/main.typ
@@ -274,9 +274,11 @@
 the directional derivative is zero.
 ]
 
-= Midterm 2 review
+= Speedrun
 
-I literally forgot everything...time to cram.
+In this chapter I wrote up notes for the entire course, starting from week 1
+and ending at week 8, because I skipped 80% of the classes leading up to the
+midterm.
 
 == Vector review
 
@@ -1078,3 +1080,31 @@ ignore it.
 
 Now we just compare our four candidates and find the greatest (or least) for
 optimization!
+
+=== Notes from Week 7 section
+
+We have a function $f : RR^n -> RR$ subject to a constraint function
+$g : RR^n -> RR^c$, where $c$ is the number of constraints. It's really a
+vector of $c$ scalar constraints,
+$
+  g = vec(g_1, g_2, dots.v, g_c).
+$
+
+Idea: define the so-called *Lagrangian* $cal(L) = f + (g, lambda)$, where
+$lambda in RR^c$ is a vector of *Lagrange multipliers* and
+$(g, lambda) = sum_(i=1)^c lambda_i g_i$ is their inner product.
+
+#theorem[
+  If $f$ and $g$ are "nice" (continuous partials), there are no redundant
+  constraints, and the problem is not overconstrained
+  ($"Rank" Dif g = c < n$), then any optimal solution that respects $g = 0$
+  satisfies $gradient f = lambda dot Dif g$ for some $lambda in RR^c$.
+]
+
+For example, to maximize $f(x, y) = x y$ on the line $g(x, y) = x + y - 1 = 0$,
+solve $gradient f = lambda gradient g$: then $(y, x) = lambda (1, 1)$ forces
+$x = y = 1\/2$.
+
+= Lecture #datetime(day: 27, year: 2025, month: 2).display()
+
+== Volume
+
+Any 3D solid can be built out of atomic pieces: slice it into thin
+cross-sections of area $A(z)$ and integrate, $V = integral_a^b A(z) dif z$.
+
+#exercise[
+  Derive formulae for the volumes of a pyramid and a cone.
+]
+
+Schley, what are you doing???
+
+== Signed area and volume
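+
+(Sketching the standard definitions here to fill this section in; this is my
+own addition, assuming the usual determinant conventions, and may not match
+how lecture sets it up.)
+
+For $u, v in RR^2$, the *signed area* of the parallelogram spanned by $u$ and
+$v$ is
+$
+  A = det mat(u_1, v_1; u_2, v_2),
+$
+which is positive when $(u, v)$ is positively oriented (counterclockwise) and
+negative otherwise.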
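+
+Likewise, for $u, v, w in RR^3$, the signed volume of the parallelepiped they
+span is the scalar triple product
+$
+  V = u dot (v times w)
+    = det mat(u_1, v_1, w_1; u_2, v_2, w_2; u_3, v_3, w_3),
+$
+with sign determined by the orientation of $(u, v, w)$. Taking absolute
+values recovers the ordinary area and volume.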