<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Home on cardona.ai</title><link>https://cardona.ai/</link><description>Recent content in Home on cardona.ai</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Tue, 29 Aug 2023 17:41:21 +0200</lastBuildDate><atom:link href="https://cardona.ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Implementing automatic differentiation from scratch</title><link>https://cardona.ai/posts/seemore-2/</link><pubDate>Tue, 29 Aug 2023 17:41:21 +0200</pubDate><guid>https://cardona.ai/posts/seemore-2/</guid><description>&lt;p>So here you are, writing some PyTorch when you mindlessly call &lt;code>loss.backward()&lt;/code> and, out of nowhere, you get the gradient of the loss with respect to all your parameters: just what you needed to improve your model. A bit fishy, isn&amp;rsquo;t it? What exactly is going on here?&lt;/p>
&lt;p>Well, long story short: you invoked &lt;code>autograd&lt;/code>, PyTorch&amp;rsquo;s automatic differentiation package, and it took care of all the computations needed. In fact, it started taking care of them long before you realized! Today we will build a simple version of &lt;code>autograd&lt;/code> to understand what kind of magic it is using. Let&amp;rsquo;s go!&lt;/p></description></item><item><title>So, why automatic differentiation?</title><link>https://cardona.ai/posts/seemore-1/</link><pubDate>Sat, 26 Aug 2023 17:41:21 +0200</pubDate><guid>https://cardona.ai/posts/seemore-1/</guid><description>&lt;h2 id="deep-what">Deep what?&lt;/h2>
&lt;blockquote>
&lt;p>&lt;em>This post is meant as an introduction and, as an exception, will briefly go over some basic topics of neural networks. If you&amp;rsquo;re already familiar with the basics of deep learning and the math involved, feel free to skip this post and jump straight to the next one, &lt;a href="https://cardona.ai/posts/seemore-2/" title="Implementing automatic differentiation from scratch">here&lt;/a>.&lt;/em>&lt;/p>
&lt;/blockquote>
&lt;p>Deep learning is concerned with deep artificial neural networks. In essence, a neural network is determined by its &lt;strong>parameters&lt;/strong> (values stored within the network that are used for computation) and &lt;strong>architecture&lt;/strong> (how these parameters interact with each other).&lt;/p></description></item><item><title>seemore Hub</title><link>https://cardona.ai/posts/seemore-hub/</link><pubDate>Thu, 10 Aug 2023 17:41:21 +0200</pubDate><guid>https://cardona.ai/posts/seemore-hub/</guid><description>&lt;p>&lt;strong>seemore&lt;/strong> is an educational project that revisits some of the basics of deep learning for computer vision by focusing on implementation and on the theoretical motivation of design choices. As such, a moderate level of familiarity with deep learning is recommended, as we will take much for granted. It is conceptually based on Andrej Karpathy&amp;rsquo;s &lt;a href="https://github.com/karpathy/makemore/tree/master">makemore&lt;/a>, which covers natural language processing instead.&lt;/p>
&lt;p>As is tradition, we will perform classification on the MNIST dataset, composed of handwritten digits, by considering increasingly complex architectures. Below you can find the roadmap (links will be added as new posts come out):&lt;/p></description></item><item><title>About me</title><link>https://cardona.ai/pages/about/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cardona.ai/pages/about/</guid><description>&lt;hr>
&lt;h1 id="work-in-progress-">Work in progress 🔨&lt;/h1></description></item></channel></rss>