dhruv's space (Zine companion posts) | https://dhruvs.space/
These posts are specifically written to complement zines.
Contents © 2020 <a href="mailto:dhruvt93@gmail.com">Dhruv Thakur</a> | Fri, 14 Feb 2020 22:35:39 GMT | Nikola (getnikola.com)

- ML Basics #4: Replace Negatives With Zeros! | https://dhruvs.space/posts/ml-basics-issue-4/ | Dhruv Thakur
<div class="cell border-box-sizing text_cell rendered"><div class="prompt input_prompt">
</div><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>This is the fourth post in <a href="https://dhruvs.space/categories/ml-basics/">this</a> series on the basics of Machine Learning. These posts are intended to serve as companion pieces to <a href="https://dhruvs.space/zines/ml-basics-binary-classification/">this</a> zine on binary classification. In the <a href="https://dhruvs.space/posts/ml-basics-issue-3/">last</a> one, we learnt how adding hidden layers to a Multilayer Perceptron helps it learn increasingly complex decision boundaries. The MLPs used so far have all used the sigmoid function as their activation function. In this post, we'll move our focus to a much simpler kind of activation function: the rectifier.</p>
<p>The rectifier was first demonstrated as a better activation function for training deep neural networks — as compared to sigmoid or hyperbolic tangent — by <a href="http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf">Xavier Glorot, Antoine Bordes, and Yoshua Bengio</a>.</p>
<p>A unit employing the rectifier is called a rectified linear unit (ReLU).</p>
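<p>As a quick illustration (not from the post itself), the rectifier simply replaces negative inputs with zero, which is exactly what the title refers to. A minimal NumPy sketch:</p>

```python
import numpy as np

def relu(x):
    """Rectifier: replace negative inputs with zero, pass positives through."""
    return np.maximum(0, x)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))  # negatives become 0, non-negatives are unchanged
```

<p>Unlike the sigmoid, the rectifier is unbounded above and its gradient is exactly 1 for positive inputs, which is part of why Glorot et al. found it easier to train deep networks with.</p>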
<p><a href="https://dhruvs.space/posts/ml-basics-issue-4/">Read more…</a> (12 min remaining to read)</p></div></div></div>
Tags: ml-basics, zine-companion | https://dhruvs.space/posts/ml-basics-issue-4/ | Wed, 04 Sep 2019 11:11:21 GMT
- ML Basics #3: More Layers! | https://dhruvs.space/posts/ml-basics-issue-3/ | Dhruv Thakur
<div class="cell border-box-sizing text_cell rendered"><div class="prompt input_prompt">
</div><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>This is the third post in <a href="https://dhruvs.space/categories/ml-basics/">this</a> series on the basics of Machine Learning. These posts are intended to serve as companion pieces to <a href="https://dhruvs.space/zines/ml-basics-binary-classification/">this</a> zine on binary classification. In the <a href="https://dhruvs.space/posts/ml-basics-issue-2/">last</a> one, we learnt how a Multilayer Perceptron can be trained to non-linearly segment a dataset. We also saw how a simple artificial neuron forms the building block of a Multilayer Perceptron — or a neural network in general — which can learn much more complicated decision boundaries.</p>
<p>Let's move on to datasets that are harder to segment. One way to improve the learning capability of an MLP is to add more neurons in the form of hidden layers. In this post we'll explore MLPs with 2 hidden layers.</p>
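<p>To make "2 hidden layers" concrete, here is a minimal forward-pass sketch (layer sizes and random weights are illustrative choices, not the post's own code):</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Layer sizes: 2 inputs -> 4 hidden -> 4 hidden -> 1 output (illustrative)
sizes = [2, 4, 4, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass through an MLP with two hidden layers."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)  # weighted sum, then sigmoid activation
    return a

print(forward(np.array([0.5, -1.0])))  # a single output in (0, 1)
```

<p>Each extra hidden layer composes another non-linear transformation on top of the previous one, which is what lets the network carve out more intricate decision boundaries.</p>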
<p><a href="https://dhruvs.space/posts/ml-basics-issue-3/">Read more…</a> (17 min remaining to read)</p></div></div></div>
Tags: ml-basics, zine-companion | https://dhruvs.space/posts/ml-basics-issue-3/ | Thu, 29 Aug 2019 09:04:21 GMT
- ML Basics #2: Multilayer Perceptron | https://dhruvs.space/posts/ml-basics-issue-2/ | Dhruv Thakur
<div class="cell border-box-sizing text_cell rendered"><div class="prompt input_prompt">
</div><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>This is the second post in <a href="https://dhruvs.space/categories/ml-basics/">this</a> series on the basics of Machine Learning. These posts are intended to serve as companion pieces to <a href="https://dhruvs.space/zines/ml-basics-binary-classification/">this</a> zine on binary classification. The <a href="https://dhruvs.space/posts/ml-basics-issue-1/">last</a> post detailed the functioning of an artificial neuron, and how it can be trained to linearly segment a dataset. However, most real world datasets are not linearly separable, which raises the question:</p>
<blockquote><p>What is the point of learning about a neuron?</p>
</blockquote>
<p>Well, by the end of this post, we'll see that a bunch of neurons, when stacked together, can learn to create powerful non-linear solution spaces. Let's see how that works.</p>
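<p>A classic way to see this is XOR: no single neuron can segment it linearly, yet a small stack of neurons handles it. A sketch with hand-picked (not learned) weights:</p>

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum followed by a step."""
    return 1.0 if x @ w + b > 0 else 0.0

def xor(x):
    # Two hidden neurons, then one output neuron combining them.
    h1 = neuron(x, np.array([1.0, 1.0]), -0.5)   # fires if x1 OR x2
    h2 = neuron(x, np.array([1.0, 1.0]), -1.5)   # fires if x1 AND x2
    # Output fires if (x1 OR x2) AND NOT (x1 AND x2), i.e. XOR.
    return neuron(np.array([h1, h2]), np.array([1.0, -1.0]), -0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor(np.array(x, dtype=float)))
```

<p>Each hidden neuron still only draws a straight line, but the output neuron combines those lines into a region no single line could describe.</p>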
<p><a href="https://dhruvs.space/posts/ml-basics-issue-2/">Read more…</a> (13 min remaining to read)</p></div></div></div>
Tags: ml-basics, zine-companion | https://dhruvs.space/posts/ml-basics-issue-2/ | Sun, 25 Aug 2019 11:35:21 GMT
- ML Basics #1: Start With A Neuron | https://dhruvs.space/posts/ml-basics-issue-1/ | Dhruv Thakur
<div class="cell border-box-sizing text_cell rendered"><div class="prompt input_prompt">
</div><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>This is the first post in <a href="https://dhruvs.space/categories/ml-basics/">this</a> series on the basics of Machine Learning. These posts are intended to serve as companion pieces to <a href="https://dhruvs.space/zines/ml-basics-binary-classification/">this</a> zine on binary classification. The zine mainly focusses on the mathematical foundations of binary classification, while these posts include implementation and analysis of the concepts as well. My aim here is to create a comprehensive catalogue of ML concepts that I can quickly refer to in the future, and that may help anybody in a position similar to mine.</p>
<p>This post complements the first segment in the zine: Linear Binary Classification. The idea is to have the content here supplement that in the zine.</p>
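<p>For a rough picture of what linear binary classification looks like in code (a sketch with hypothetical weights, not the post's own implementation): a single neuron computes a weighted sum of the inputs and squashes it through a sigmoid, so its decision boundary is the straight line where the weighted sum is zero.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """A single artificial neuron: weighted sum passed through a sigmoid.
    Points with output > 0.5 lie on one side of the line w.x + b = 0."""
    return sigmoid(x @ w + b)

# Hypothetical weights defining the boundary x1 + x2 - 1 = 0
w, b = np.array([1.0, 1.0]), -1.0
print(predict(np.array([2.0, 2.0]), w, b))  # above the line: close to 1
print(predict(np.array([0.0, 0.0]), w, b))  # below the line: close to 0
```

<p>Training, covered in the post, amounts to nudging <code>w</code> and <code>b</code> until that line separates the two classes.</p>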
<p>The other posts in the series can be found <a href="https://dhruvs.space/categories/ml-basics/">here</a>.</p>
<p><a href="https://dhruvs.space/posts/ml-basics-issue-1/">Read more…</a> (10 min remaining to read)</p></div></div></div>
Tags: ml-basics, zine-companion | https://dhruvs.space/posts/ml-basics-issue-1/ | Thu, 15 Aug 2019 12:23:21 GMT