<p><strong>Vincent Granville's Posts - AnalyticBridge</strong></p>
<p><strong>The Exponential Mean: Alternative to Classic Means</strong> (Vincent Granville, 2020-08-30)</p>
<p>Given <em>n</em> observations <em>x</em><sub>1</sub>, ..., <em>x</em><sub><em>n</em></sub>, the generalized mean (also called <em>power mean</em>) is defined as</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/7573575664?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/7573575664?profile=RESIZE_710x" class="align-center"/></a></p>
<p>The case <em>p</em> = 1 corresponds to the traditional arithmetic mean, while <em>p</em> = 0 yields the geometric mean, and <em>p</em> = -1 yields the harmonic mean. See <a href="https://en.wikipedia.org/wiki/Generalized_mean" target="_blank" rel="noopener">here</a> for details. This metric is favored by statisticians. It is a particular case of the <a href="https://en.wikipedia.org/wiki/Quasi-arithmetic_mean" target="_blank" rel="noopener">quasi-arithmetic mean</a>.</p>
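<p>As a quick illustration (my own minimal sketch, not from the original article), the power mean is straightforward to compute; the <em>p</em> = 0 case is handled as the limit, which is the geometric mean:</p>

```python
import math

def power_mean(xs, p):
    """Generalized (power) mean of positive observations xs.

    p = 1 gives the arithmetic mean, p = -1 the harmonic mean,
    and p = 0 is treated as the limiting case, the geometric mean.
    """
    n = len(xs)
    if p == 0:
        # Limit p -> 0: exp of the average of the logs.
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** p for x in xs) / n) ** (1 / p)

xs = [1.0, 2.0, 4.0]
print(power_mean(xs, 1))   # arithmetic mean
print(power_mean(xs, 0))   # geometric mean
print(power_mean(xs, -1))  # harmonic mean
```

<p>Note that the three values are ordered (harmonic &le; geometric &le; arithmetic), a standard property of the power mean as <em>p</em> increases.</p>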
<p>Here I introduce another kind of mean, called the <em>exponential mean</em>, also based on a parameter <em>p</em>, that may appeal to data scientists and machine learning professionals. It is also a special case of the quasi-arithmetic mean. Though the concept is basic, there is very little if any literature about it. It is related to <a href="https://en.wikipedia.org/wiki/LogSumExp" target="_blank" rel="noopener">LogSumExp</a> and the <a href="https://en.wikipedia.org/wiki/Log_semiring" target="_blank" rel="noopener">log semiring</a>. It is defined as follows:</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/7573703674?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/7573703674?profile=RESIZE_710x" class="align-center"/></a></p>
<p>Here the logarithm is in base <em>p</em>, with <em>p</em> positive. As <em>p</em> tends to 0, <em>m</em><sub><em>p</em></sub> tends to the minimum of the observations. As <em>p</em> tends to 1, it yields the classic arithmetic mean, and as <em>p</em> tends to infinity, it yields the maximum of the observations.</p>
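<p>The formula itself is in the image above; a sketch consistent with the three limiting cases just described (my reconstruction, so treat the exact form as an assumption) is <em>m</em><sub><em>p</em></sub> = log<sub><em>p</em></sub> of the average of <em>p</em> raised to each observation:</p>

```python
import math

def exp_mean(xs, p):
    """Exponential mean: log base p of the average of p**x_i.

    Reconstructed from the text above (an assumption, not the article's
    exact formula): as p -> 0 it tends to min(xs), as p -> 1 to the
    arithmetic mean, and as p -> infinity to max(xs).
    """
    if p <= 0 or p == 1:
        raise ValueError("p must be positive and different from 1")
    return math.log(sum(p ** x for x in xs) / len(xs), p)

xs = [1.0, 2.0, 4.0]
print(exp_mean(xs, 1.0001))  # close to the arithmetic mean 7/3
print(exp_mean(xs, 1e6))     # close to max(xs) = 4
print(exp_mean(xs, 1e-6))    # close to min(xs) = 1
```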
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/7720192463?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/7720192463?profile=RESIZE_710x" class="align-center"/></a></p>
<p><strong>Content of this article</strong></p>
<ul>
<li>Advantages of the exponential mean</li>
<li>Illustration on a test data set</li>
<li>Important inequality</li>
<li>Doubly exponential mean</li>
</ul>
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/alternative-to-the-arithmetic-geometric-and-harmonic-means" target="_blank" rel="noopener">here</a>.</em></p>
<p><strong>Bernoulli Lattice Models - Connection to Poisson Processes</strong> (Vincent Granville, 2020-06-05)</p>
<p>Bernoulli lattice processes may be one of the simplest examples of point processes, and can be used as an introduction to more complex spatial processes that rely on advanced measure theory for their definition. In this article, we show the differences and analogies between Bernoulli lattice processes on the standard rectangular or hexagonal grid and the Poisson process, including convergence of discrete lattice processes to a continuous Poisson process, mainly in two dimensions. We also illustrate that even though these lattice processes are purely random, they don't look random to the naked eye.</p>
<p>We discuss basic properties such as the distribution of the number of points in any given area, or the distribution of the distance to the nearest neighbor. Bernoulli lattice processes have been used as models in financial problems. Most of the papers on this topic are hard to read, but here we discuss the concepts in simple English. Interesting number theory problems about sums of squares, deeply related to these lattice processes, are also discussed. Finally, we show how to identify whether a particular realization comes from a Bernoulli lattice process, a Poisson process, or a combination of both.</p>
<p>See below a realization of a Bernoulli process on the regular hexagonal lattice. The main feature of such a process is that the point locations are fixed, not random. But whether a point is "fired" or not (that is, marked in blue) is purely random and independent of whether any other point is fired. The firing of a point is a Bernoulli variable with parameter <em>p</em>.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/5611712675?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/5611712675?profile=RESIZE_710x" class="align-center"/></a></p>
<p style="text-align: center;"><strong>Figure 1</strong>: realization of a Bernoulli hexagonal lattice process</p>
<p>More sophisticated models, known as Markov random fields, allow neighboring points to be correlated. They are useful in image analysis.</p>
<p>By contrast, Poisson processes assume that the point locations themselves are random. The points being fired are uniformly distributed in the plane, not restricted to integer or grid coordinates. In short, Bernoulli lattice processes are discrete approximations to Poisson processes. Below is an example of a realization of a Poisson process.</p>
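<p>A minimal simulation (my own sketch, on a square grid rather than the hexagonal one shown in Figure 1) makes the contrast concrete: the lattice process fires fixed grid points at random, while the Poisson approximation randomizes the locations themselves. For small <em>p</em>, the count of fired points is Binomial and close to Poisson:</p>

```python
import random

random.seed(42)

# Bernoulli lattice process on an n x n square grid: each fixed grid
# point is "fired" independently with probability p.
n, p = 50, 0.05
lattice = [(i, j) for i in range(n) for j in range(n) if random.random() < p]

# Poisson-like process on the same square: the number of points is
# random (Binomial(n*n, p), close to Poisson(lam) for small p), and
# the locations are uniform over the square rather than on the grid.
lam = n * n * p
n_points = sum(random.random() < p for _ in range(n * n))
poisson = [(random.uniform(0, n), random.uniform(0, n)) for _ in range(n_points)]

print(len(lattice), "lattice points fired (expected about", lam, ")")
print(len(poisson), "Poisson points, at non-grid locations")
```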
<p><em><a href="https://www.datasciencecentral.com/profiles/blogs/bernouilli-lattice-models-connection-to-poisson-processes" target="_blank" rel="noopener">Read the full article here</a>. </em></p>
<p><strong>Math / data science articles by the same author</strong>:</p>
<p>Here is a selection of articles pertaining to experimental math and data science:</p>
<ul>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/free-book-statistics-new-foundations-toolbox-and-machine-learning" target="_blank" rel="noopener">Statistics: New Foundations, Toolbox, and Machine Learning Recipes</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/fee-book-applied-stochastic-processes" target="_blank" rel="noopener">Applied Stochastic Processes</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/new-probabilistic-approach-to-factoring-big-numbers" target="_blank" rel="noopener">New Probabilistic Approach to Factoring Big Numbers</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/chaos-attractors-in-machine-learning-systems" target="_blank" rel="noopener">Variance, Attractors and Behavior of Chaotic Statistical Systems</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/new-family-of-generalized-gaussian-distributions" target="_blank" rel="noopener">New Family of Generalized Gaussian Distributions</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/a-strange-family-of-statistical-distributions" target="_blank" rel="noopener">A Strange Family of Statistical Distributions</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/some-fun-with-the-golden-ratio-time-series-and-number-theory" target="_blank" rel="noopener">Some Fun with Gentle Chaos, the Golden Ratio, and Stochastic Number...</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/fascinating-new-results-in-the-theory-of-randomness" target="_blank" rel="noopener">Fascinating New Results in the Theory of Randomness</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/two-beautiful-mathematical-results-part-2" target="_blank" rel="noopener">Two Beautiful Mathematical Results - Part 2</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/number-theory-nice-generalization-of-the-waring-conjecture" target="_blank" rel="noopener">Number Theory: Nice Generalization of the Waring Conjecture</a></li>
</ul>
<p><strong>Explaining Data Science to a Non-Data Scientist</strong> (Vincent Granville, 2020-06-04)</p>
<p><strong><em>Summary:</em></strong><em> Explaining data science to a non-data scientist isn’t as easy as it sounds. You may know a lot about math, tools, techniques, data, and computer architecture, but the question is how to explain all this briefly without getting buried in detail. You might try this approach.</em></p>
<p> </p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/5521687477?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/5521687477?profile=RESIZE_710x" width="350" class="align-right"/></a>We’ve all been there. You’re at a party, or maybe striking up a conversation with that pretty girl at the bar, and sooner or later the question comes up: “what do you do?” Since you have what is reported to be the sexiest job in the world, you proudly respond, “I’m a data scientist”.</p>
<p>OK, what happens next depends on exactly what you say. Do your fellow partygoers hang on your every word in anticipation? Do you, as they say, get the pretty girl’s digits? You respond:</p>
<p><em>“I’m working with deep neural nets with dozens of hidden layers on cloud based TPUs using Tensorflow. Right now I’m working to put bounding boxes around images of people so I can create multi-class deep learning models to predict their…”</em></p>
<p></p>
<p><a href="https://www.datasciencecentral.com/profiles/blogs/explaining-data-science-to-a-non-data-scientist" target="_blank" rel="noopener">Read the full article here</a>.</p>
<p><strong>New Probabilistic Approach to Factoring Big Numbers</strong> (Vincent Granville, 2020-05-27)</p>
<p>Products of two large primes are at the core of many encryption algorithms, as factoring the product is very hard for numbers with a few hundred digits. The two prime factors are associated with the encryption keys (public and private keys). Here we describe a new approach to factoring a big number that is the product of two primes of roughly the same size. It is designed especially to handle this problem and to identify flaws in encryption algorithms.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/5397782876?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/5397782876?profile=RESIZE_710x" class="align-center"/></a></p>
<p style="text-align: center;"><em>Riemann zeta function in the complex plane</em></p>
<p>While at first glance it appears to substantially reduce the computational complexity of traditional factoring, at this stage a lot of progress is still needed to make the new algorithm efficient. An interesting feature is that its success depends on the probability that two numbers are co-prime, given that they don't share the first few primes (say 2, 3, 5, 7, 11, 13) as common divisors. This probability can be computed explicitly and is about 99%.</p>
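<p>That figure can be checked in a few lines (my own sketch): the unconditional probability that two random integers are co-prime is 6/&pi;&sup2; = &Pi;<sub><em>p</em></sub> (1 - 1/<em>p</em>&sup2;), so conditioning on not sharing any prime up to 13 simply divides out the factors for those six primes:</p>

```python
import math

small_primes = [2, 3, 5, 7, 11, 13]

# Unconditional probability that two random integers are co-prime.
p_coprime = 6 / math.pi ** 2

# Probability that they share none of the primes <= 13 as a divisor.
p_no_small = 1.0
for q in small_primes:
    p_no_small *= 1 - 1 / q ** 2

# Conditional probability of being co-prime, given no small common prime.
p_cond = p_coprime / p_no_small
print(round(p_cond, 4))  # prints 0.9836, consistent with "about 99%"
```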
<p>The methodology relies heavily on solving systems of congruences, the Chinese Remainder Theorem, and the modular multiplicative inverse of some carefully chosen integers. We also discuss computational complexity issues. Finally, the off-the-beaten-path material presented here leads to many original exercises or exam questions for students learning probability, computer science, or number theory: proving the various simple statements made in my article. </p>
<p><strong>Content</strong></p>
<p>Some Number Theory Explained in Simple English</p>
<ul>
<li>Co-primes and pairwise co-primes</li>
<li>Probability of being co-prime</li>
<li>Modular multiplicative inverse</li>
<li>Chinese remainder theorem, version A</li>
<li>Chinese remainder theorem, version B</li>
</ul>
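<p>Two of the tools listed above, the modular multiplicative inverse and the Chinese Remainder Theorem, are short exercises in modern Python (3.8+ for the three-argument <code>pow</code> with exponent -1); this is a toy illustration, not the article's factoring algorithm:</p>

```python
# Modular multiplicative inverse: pow(a, -1, m) for gcd(a, m) = 1.
a, m = 7, 26
inv = pow(a, -1, m)
print(inv, (a * inv) % m)  # prints 15 1

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise co-prime moduli:
    returns the unique x mod prod(moduli) with x = r_i (mod m_i)."""
    M = 1
    for m_i in moduli:
        M *= m_i
    x = 0
    for r_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        x += r_i * M_i * pow(M_i, -1, m_i)
    return x % M

# The classic Sun Tzu example: x = 2 (mod 3), 3 (mod 5), 2 (mod 7).
print(crt([2, 3, 2], [3, 5, 7]))  # prints 23
```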
<p>The New Factoring Algorithm</p>
<ul>
<li>Improving computational complexity</li>
<li>Five-step algorithm</li>
<li>Probabilistic optimization</li>
<li>Compact Formulation of the Problem</li>
</ul>
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/new-probabilistic-approach-to-factoring-big-numbers" target="_blank" rel="noopener">here</a>. </em></p>
<p><span><strong>Other Math Articles by Same Author</strong></span></p>
<p>Here is a selection of articles pertaining to experimental math and probabilistic number theory:</p>
<ul>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/free-book-statistics-new-foundations-toolbox-and-machine-learning" target="_blank" rel="noopener">Statistics: New Foundations, Toolbox, and Machine Learning Recipes</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/fee-book-applied-stochastic-processes" target="_blank" rel="noopener">Applied Stochastic Processes</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/chaos-attractors-in-machine-learning-systems" target="_blank" rel="noopener">Variance, Attractors and Behavior of Chaotic Statistical Systems</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/new-family-of-generalized-gaussian-distributions" target="_blank" rel="noopener">New Family of Generalized Gaussian Distributions</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/a-beautiful-result-in-probability-theory" target="_blank" rel="noopener">A Beautiful Result in Probability Theory</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/two-new-deep-conjectures-in-probabilistic-number-theory" target="_blank" rel="noopener">Two New Deep Conjectures in Probabilistic Number Theory</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/extreme-events-modeling-using-continued-fractions" target="_blank" rel="noopener">Extreme Events Modeling Using Continued Fractions</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/a-strange-family-of-statistical-distributions" target="_blank" rel="noopener">A Strange Family of Statistical Distributions</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/some-fun-with-the-golden-ratio-time-series-and-number-theory" target="_blank" rel="noopener">Some Fun with Gentle Chaos, the Golden Ratio, and Stochastic Number...</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/fascinating-new-results-in-the-theory-of-randomness" target="_blank" rel="noopener">Fascinating New Results in the Theory of Randomness</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/two-beautiful-mathematical-results-part-2" target="_blank" rel="noopener">Two Beautiful Mathematical Results - Part 2</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/two-beautiful-mathematical-results" target="_blank" rel="noopener">Two Beautiful Mathematical Results</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/number-theory-nice-generalization-of-the-waring-conjecture" target="_blank" rel="noopener">Number Theory: Nice Generalization of the Waring Conjecture</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/amazing-random-sequences-with-cool-applications" target="_blank" rel="noopener">Fascinating Chaotic Sequences with Cool Applications</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/simple-proof-of-prime-number-theorem" target="_blank" rel="noopener">Simple Proof of the Prime Number Theorem</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/factoring-massive-numbers-a-new-machine-learning-approach" target="_blank" rel="noopener">Factoring Massive Numbers: Machine Learning Approach</a></li>
</ul>
<p><strong>Simple Trick to Dramatically Improve Speed of Convergence</strong> (Vincent Granville, 2020-05-05)</p>
<p>We discuss a simple trick to significantly accelerate the convergence of an algorithm when the error term decreases in absolute value over successive iterations, with the error term oscillating (not necessarily periodically) between positive and negative values. </p>
<p>We first illustrate the technique on a simple, well-known case: the computation of log 2 using its slow-converging alternating series. We then discuss a very interesting and more complex case, before finally focusing on a more challenging example in the context of probabilistic number theory and experimental math.</p>
<p>The technique must be tested for each specific case to assess the improvement in convergence speed. There is no general, theoretical rule to measure the gain, and if the error term does not oscillate in a balanced way between positive and negative values, this technique does not produce any gain. However, in the examples below, the gain was dramatic. </p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/4768157670?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/4768157670?profile=RESIZE_710x" class="align-center"/></a></p>
<p>Let's say you run an algorithm, for instance gradient descent. The input (model parameters) is <em>x</em>, the output is <em>f</em>(<em>x</em>), for instance a local optimum. We consider <em>f</em>(<em>x</em>) to be univariate, but the technique easily generalizes to the multivariate case by applying it separately to each component. At iteration <em>k</em>, you obtain an approximation <em>f</em>(<em>k</em>, <em>x</em>) of <em>f</em>(<em>x</em>), and the error is <em>E</em>(<em>k</em>, <em>x</em>) = <em>f</em>(<em>x</em>) - <em>f</em>(<em>k</em>, <em>x</em>). The total number of iterations is <em>N</em>, starting with the first iteration <em>k</em> = 1.</p>
<p>The idea consists in first running the algorithm as is, then computing "smoothed" approximations using an <em>m</em>-step scheme (detailed in the full article).</p>
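<p>The exact <em>m</em>-step scheme is in the linked article; one plausible minimal version (my own assumption, not necessarily the author's exact recipe) is to repeatedly average successive approximations, which works precisely when the error oscillates in sign, as with the alternating series for log 2:</p>

```python
import math

# Partial sums of the alternating series log 2 = 1 - 1/2 + 1/3 - ...
N = 20
s, sums = 0.0, []
for k in range(1, N + 1):
    s += (-1) ** (k + 1) / k
    sums.append(s)

def smooth(seq, m):
    """Repeated pairwise averaging of successive approximations
    (a sketch of the kind of m-step smoothing described above)."""
    for _ in range(m):
        seq = [(a + b) / 2 for a, b in zip(seq, seq[1:])]
    return seq

raw_err = abs(sums[-1] - math.log(2))
acc_err = abs(smooth(sums, 5)[-1] - math.log(2))
print(raw_err)  # about 0.024: raw error decays only like 1/(2N)
print(acc_err)  # many orders of magnitude smaller after 5 passes
```

<p>Each averaging pass cancels the leading oscillating term of the error, which is why the gain is dramatic here; as noted below, if the error does not oscillate, averaging buys nothing.</p>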
<p><a href="https://www.datasciencecentral.com/profiles/blogs/simple-trick-to-dramatically-improve-speed-of-convergence" target="_blank" rel="noopener">Read the full article here</a>.</p>
<p><strong>Content</strong></p>
<ul>
<li>General framework and simple illustration</li>
<li>A strange function</li>
<li>Even stranger functions</li>
</ul>
<p><strong>State-of-the-Art Statistical Science to Tackle Famous Number Theory Conjectures</strong> (Vincent Granville, 2020-03-01)</p>
<p>The methodology described here has broad applications, leading to new statistical tests, a new type of ANOVA (analysis of variance), improved design of experiments, interesting fractional factorial designs, a better understanding of irrational numbers with applications in cryptography, gaming, and Fintech, and high-quality random number generators (for when you really need them). It also features exact arithmetic / high-performance computing and distributed algorithms to compute millions of binary digits for an infinite family of real numbers, including detection of auto- and cross-correlations (or lack thereof) in the digit distributions.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3972061349?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3972061349?profile=RESIZE_710x" class="align-center"/></a></p>
<p>The data processed in my experiment, consisting of raw irrational numbers (described by a new class of elementary recurrences), led to the discovery of unexpected apparent patterns in their digit distribution: in particular, the fact that a few of these numbers, contrary to popular belief, do not have 50% of their binary digits equal to 1. It turned out that perfectly random digits simulated in large numbers, with a good enough pseudo-random generator, also exhibit the same strange behavior, suggesting that pure randomness may not be as random as we imagine. Ironically, failure to exhibit these patterns would be an indicator of a real departure from pure randomness in the digits in question.</p>
<p>In addition to new statistical and mathematical methods, discoveries, and interesting applications, you will learn in my article how to avoid the type of statistical traps that lead to erroneous conclusions when performing a large number of statistical tests, and how not to be misled by false appearances. I call them <em>statistical hallucinations</em> and <em>false outliers</em>.</p>
<p>This article has two main sections: section 1, with deep research in number theory, and section 2, with deep research in statistics and its applications. You may skip one of the two sections depending on your interests and how much time you have. Both sections, despite being state-of-the-art in their respective fields, are written in simple English. It is my wish that with this article, I can get data scientists interested in math, and the other way around: in both cases the topics have been chosen to be exciting and modern. I also hope that this article will give you new powerful tools to add to your arsenal of tricks and techniques. The two topics are related, the statistical analysis being based on the numbers discussed in the math section.</p>
<p>One of the interesting new topics discussed here for the first time is the cross-correlation between the digits of two irrational numbers. These digit sequences are treated as multivariate time series. I believe this is the first time ever that this subject is not only investigated in detail, but in addition comes with a deep, spectacular probabilistic number theory result about the distributions in question, with important implications in security and cryptography systems. Another related topic discussed here is a generalized version of the Collatz conjecture, with some insights on how to potentially solve it.</p>
<p><a href="https://www.datasciencecentral.com/profiles/blogs/state-of-the-art-statistical-science-to-address-famous-number-the" target="_blank" rel="noopener">Read the full article here</a>. </p>
<p><strong>Content</strong></p>
<p>1. On the Digit Distribution of Quadratic Irrational Numbers</p>
<ul>
<li>Properties of the recursion</li>
<li>Reverse recursion</li>
<li>Properties of the reverse recursion</li>
<li>Connection to Collatz conjecture</li>
<li>Source code</li>
<li>New deep probabilistic number theory results</li>
<li>Spectacular new result about cross-correlations</li>
<li>Applications</li>
</ul>
<p>2. New Statistical Techniques Used in Our Analysis</p>
<ul>
<li>Data, features, and preliminary analysis</li>
<li>Doing it the right way</li>
<li>Are the patterns found a statistical illusion, or caused by errors, or real?</li>
<li>Pattern #1: Non-Gaussian behavior</li>
<li>Pattern #2: Illusionary outliers</li>
<li>Pattern #3: Weird distribution for block counts</li>
<li>Related articles and books</li>
</ul>
<p>Appendix</p>
<p><strong>Advanced Analytic Platforms – Changes in the Leaderboard 2020</strong> (Vincent Granville, 2020-02-21)</p>
<p><strong><em>Summary:</em></strong> <em>The Gartner Magic Quadrant for Data Science and Machine Learning Platforms is just out. The big news is how much more capable all the platforms have become. Of course, there are also some interesting winner and loser stories.</em></p>
<p>The Gartner Magic Quadrant for Data Science and Machine Learning Platforms is just out for 2020. The really big news is how many excellent choices are now available. In a remarkable move, the whole field of competitors has moved strongly up and to the right, offering more Leaders and near-leader Visionaries than ever before.</p>
<p>It’s a mark of maturity in our industry that so many platforms offer fully capable model development, operationalizing, and management features. That list of requirements as defined by Gartner grows longer every year and earning a better rating requires increasing capability and increasing customer satisfaction.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3886789662?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3886789662?profile=RESIZE_710x" class="align-center"/></a></p>
<p><span><strong>What Are the Major Changes?</strong></span></p>
<p>As in previous years we’ve charted the major changes in position using green arrows for improvement and red arrows to indicate a reduced rating. The blue dots are current ratings and the gray dots are from a year ago.</p>
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/advanced-analytic-platforms-changes-in-the-leaderboard-2020" target="_blank" rel="noopener">here</a>, with the 2020 version of the above chart and comments.</em></p>
<p><strong>Sentiment Analysis with Naive Bayes and LSTM</strong> (Vincent Granville, 2020-02-20)</p>
<p class="justifyfull" dir="ltr"><span>In this notebook, we try to predict the positive (label 1) or negative (label 0) sentiment of each sentence. We use the UCI Sentiment Labelled Sentences Data Set.</span></p>
<p class="justifyfull" dir="ltr"><span>Sentiment analysis is very useful in many areas. For example, it can be used for internet conversations moderation. Also, it is possible to predict ratings that users can assign to a certain product (food, household appliances, hotels, films, etc) based on the reviews.</span></p>
<p class="justifyfull" dir="ltr"><span><a href="https://storage.ning.com/topology/rest/1.0/file/get/3873882138?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3873882138?profile=RESIZE_710x" class="align-center"/></a></span></p>
<p class="justifyfull" dir="ltr"><span>In this notebook we use two families of machine learning algorithms: Naive Bayes (NB) and long short-term memory (LSTM) neural networks. Useful references on these topics:</span></p>
<ul>
<li dir="ltr"><p dir="ltr">AYLIEN</p>
</li>
<li dir="ltr"><p dir="ltr">Deeplearning4j</p>
</li>
<li dir="ltr"><p dir="ltr">Understanding LSTM Networks</p>
</li>
<li dir="ltr"><p dir="ltr">Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling<span> </span><span> </span></p>
</li>
<li><p dir="ltr">The Unreasonable Effectiveness of Recurrent Neural Networks</p>
</li>
</ul>
<p class="justifyfull" dir="ltr"><span>We will use pandas and numpy for data manipulation, nltk for natural language processing, matplotlib, seaborn and plotly for data visualization, and sklearn and keras for building the models.</span></p>
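<p>As a minimal sketch (my own illustration, not the notebook's actual code), the Naive Bayes half of the pipeline can be put together with sklearn; the tiny labeled sentences below are hypothetical stand-ins for the UCI data set:</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the UCI Sentiment Labelled Sentences data:
texts = [
    "great movie, loved it",        # label 1 = positive
    "terrible food, never again",   # label 0 = negative
    "excellent service",
    "awful experience",
    "really enjoyable",
    "very disappointing",
]
labels = [1, 0, 1, 0, 1, 0]

# Bag-of-words counts feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["excellent and enjoyable"]))  # [1] -> positive
```

<p>The LSTM variant would instead feed padded word-index sequences into a keras Embedding, LSTM and sigmoid Dense stack, which is what the full notebook covers.</p>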
<p class="justifyfull" dir="ltr"><em>Read the full article with source code and illustrations, <a href="https://www.datasciencecentral.com/profiles/blogs/sentiment-analysis-with-naive-bayes-and-lstm" target="_blank" rel="noopener">here</a>. </em></p>Common Errors in Machine Learning due to Poor Statistics Knowledgetag:www.analyticbridge.datasciencecentral.com,2020-02-07:2004291:BlogPost:3967692020-02-07T16:48:30.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>Probably the worst error is thinking there is a correlation when that correlation is purely artificial. Take a data set with 100,000 variables and, say, 10 observations. Compute all (99,999 * 100,000) / 2 cross-correlations. You are almost guaranteed to find one above 0.999. This is best illustrated in my article<span> </span><a href="https://www.datasciencecentral.com/profiles/blogs/how-to-lie-with-p-values" target="_blank" rel="noopener">How to Lie with P-values</a> (which also discusses how to detect and fix the problem).</p>
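<p>The effect is easy to reproduce. The sketch below (my illustration, scaled down from the 100,000 variables in the article) draws a few thousand purely random variables with only 10 observations each, and still finds a very strong "correlation":</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_obs = 2_000, 10                  # scaled down from 100,000 variables
X = rng.standard_normal((n_vars, n_obs))   # pure noise: no real correlation

C = np.corrcoef(X)                         # all pairwise cross-correlations
np.fill_diagonal(C, 0.0)                   # ignore trivial self-correlations
print(f"largest spurious correlation: {np.abs(C).max():.3f}")  # typically > 0.9
```

<p>With the full 100,000 variables the pair count grows to roughly 5 billion, so correlations above 0.999 become all but guaranteed.</p>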
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3852501387?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3852501387?profile=RESIZE_710x" class="align-center"/></a></p>
<p>This is being done on such a large scale that it is probably the main cause of fake news, and the impact is disastrous on people who take for granted what they read in the news or hear from the government. Some people are sent to jail based on evidence tainted with major statistical flaws. Government money is spent, propaganda is generated, wars are started, and laws are created based on false evidence. Sometimes the data scientist has no choice but to knowingly cook the numbers to keep her job. Usually, these “bad stats” end up being featured in beautiful but faulty visualizations: axes are truncated, charts are distorted, and observations and variables are carefully chosen just to make a (wrong) point.</p>
<p><a href="https://www.datasciencecentral.com/profiles/blogs/common-errors-in-machine-learning-due-to-poor-statistics-knowledg" target="_blank" rel="noopener">Read the full article here</a>. </p>
<p><strong>Related articles</strong></p>
<ul>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/how-to-lie-with-p-values" target="_blank" rel="noopener">How to Lie with P-values</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/four-types-of-data-scientist" target="_blank" rel="noopener">Four Types of Data Scientist</a></li>
<li><a href="https://www.bigdatanews.datasciencecentral.com/profiles/blogs/debunking-forbes-article-about-the-death-of-the-data-scientist" target="_blank" rel="noopener">Debunking Forbes Article about the Death of the Data Scientist</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/why-you-should-be-a-data-science-generalist" target="_blank" rel="noopener">Why You Should be a Data Science Generalist - and How to Become One</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/becoming-a-billionaire-data-scientist-vs-struggling-to-get-a-100k">Becoming a Billionaire Data Scientist vs Struggling to Get a $100k Job<span> </span></a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/opinion-is-a-phd-helpful-for-a-data-science-career" target="_blank" rel="noopener">Is a PhD helpful for a data science career?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/if-data-science-is-in-demand-why-is-it-so-hard-to-get-a-job" target="_blank" rel="noopener">If data science is in demand, why is it so hard to get a job?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/why-do-people-with-no-experience-want-to-become-data-scientists" target="_blank" rel="noopener">Why do people with no experience want to become data scientists?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/why-is-becoming-a-data-scientist-so-difficult" target="_blank" rel="noopener">Why is Becoming a Data Scientist so Difficult?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/full-stack-data-scientist-the-elusive-unicorn-and-data-hacker" target="_blank" rel="noopener">Full Stack Data Scientist: The Elusive Unicorn and Data Hacker</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/statistical-significance-and-p-values-take-another-blow" target="_blank" rel="noopener">Statistical Significance and p-Values Take Another Blow</a></li>
<li><a href="https://www.datasciencecentral.com/forum/topics/are-data-science-or-stats-curricula-in-us-too-specialized" target="_blank" rel="noopener">Are data science or stats curricula in US too specialized?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/how-do-you-identify-an-actual-data-scientist" target="_blank" rel="noopener">How do you identify an actual data scientist?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/is-it-still-possible-today-to-become-a-self-taught-data-scientist" target="_blank" rel="noopener">Is it still possible today to become a self-taught data scientist?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/will-the-job-outlook-for-data-scientists-severely-decline-after-2" target="_blank" rel="noopener">Will the job outlook for data scientists severely decline after 2020?</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/why-logistic-regression-should-be-the-last-thing-you-learn-when-b" target="_blank" rel="noopener">Why Logistic Regression should be the last thing you learn</a></li>
</ul>
<p><em>Source for picture: <a href="https://storage.ning.com/topology/rest/1.0/file/get/3852503404?profile=original" target="_blank" rel="noopener">here</a> </em></p>New Perspective on Fermat's Last Theoremtag:www.analyticbridge.datasciencecentral.com,2020-01-30:2004291:BlogPost:3964002020-01-30T08:09:04.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>Fermat's last conjecture puzzled mathematicians for 300 years, and was eventually proved only recently. In this note, I propose a generalization that could lead to a much simpler proof and a more powerful result with broader applications, including solving numerous similar equations. As usual, my research involves a significant amount of computation and experimental math, as an exploratory step before stating new conjectures and eventually trying to prove them. The methodology is very similar to that used in data science, involving the following steps:</p>
<ol>
<li>Identify and process the data. Here the data set consists of all real numbers; it is infinite, which brings its own challenges. On the plus side, the data is public and accessible to everyone, though very powerful computation techniques are required, usually involving a distributed architecture. </li>
<li>Data cleaning: in this case, inaccuracies are caused by not using enough precision; the solution consists of finding better / faster algorithms for your computations, and sometimes having to work with exact arithmetic, using<span> </span><a href="https://www.datasciencecentral.com/forum/topics/question-how-precision-computing-in-python" target="_blank" rel="noopener">Bignum libraries</a>.</li>
<li>Sample data and perform exploratory analysis to identify patterns. Formulate hypotheses. Perform statistical tests to validate (or not) these hypotheses. Then formulate conjectures based on this analysis. </li>
<li>Build models (about how your numbers seem to behave) and focus on models offering the best fit. Perform simulations based on your model, see if your numbers agree with your simulations, by testing on a much larger set of numbers. Discard conjectures that do not pass these tests.</li>
<li>Formally prove or disprove retained conjectures, when possible. Then write a conclusion if possible: in this case, a new, major mathematical theorem, showing potential applications. This last step is similar to data scientists presenting the main insights of their analysis, to a layman audience.</li>
</ol>
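<p>Step 2 is worth a quick illustration (my own sketch, not from the article). The famous near miss 3987^12 + 4365^12 ≈ 4472^12 fools a floating-point test with a loose tolerance, while exact big-integer arithmetic, Python's built-in bignum, settles it immediately:</p>

```python
# Famous near miss: 3987**12 + 4365**12 is very close to 4472**12.
x, y, z, n = 3987, 4365, 4472, 12

# A naive floating-point test with a loose tolerance wrongly "confirms" it:
root = (float(x) ** n + float(y) ** n) ** (1.0 / n)
print(abs(root - z) < 1e-6)   # True: the float check is fooled

# Exact big-integer arithmetic settles it:
print(x ** n + y ** n == z ** n)   # False: not actually a solution
```

<p>This is exactly why experimental number theory often needs exact or arbitrary-precision arithmetic rather than standard floating point.</p>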
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3840031367?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3840031367?profile=RESIZE_710x" class="align-center"/></a></p>
<p style="text-align: center;"><em>See <a href="https://www.datasciencecentral.com/profiles/blogs/new-perspective-on-fermat-s-last-theorem?xg_source=activity" target="_blank" rel="noopener">full article</a> for explanations about this table (representing the number of solutions)</em></p>
<p>The motivation in this article is two-fold:</p>
<ul>
<li>Presenting a new path that can lead to interesting results and theoretical research in mathematics (yet the writing style and content are accessible to the layman).</li>
<li>Offering data scientists and machine learning / AI practitioners (including newbies) an interesting framework to test their programming, discovery and analysis skills, using a huge (infinite) data set that has been available to everyone since the beginning of time, applied to a fascinating problem. </li>
</ul>
<p><em>Read full article <a href="https://www.datasciencecentral.com/profiles/blogs/new-perspective-on-fermat-s-last-theorem?xg_source=activity" target="_blank" rel="noopener">here</a>. For more math-oriented articles, visit <a href="https://www.datasciencecentral.com/profiles/blogs/my-data-science-machine-learning-and-related-articles" target="_blank" rel="noopener">this page</a> (check the math section), or download my books, available <a href="https://www.datasciencecentral.com/profiles/blogs/new-books-and-resources-for-dsc-members" target="_blank" rel="noopener">here</a>.</em></p>Best Languages for Data Science and Statistics in One Picturetag:www.analyticbridge.datasciencecentral.com,2020-01-29:2004291:BlogPost:3963922020-01-29T03:41:02.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><span>Hundreds of programming languages compete in the data science and statistics market, with Python, R, SAS and SQL the standouts. If you're looking to branch out and add a new programming language to your skill set, which one should you learn? This one picture breaks down the differences between the four languages.</span></p>
<p></p>
<p><span><a href="https://storage.ning.com/topology/rest/1.0/file/get/3838310782?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3838310782?profile=RESIZE_710x" class="align-center"/></a></span></p>
<p></p>
<p>View the full picture (with pluses and minuses) as well as related articles, <a href="https://www.datasciencecentral.com/profiles/blogs/best-languages-for-data-science-and-statistics-in-one-picture" target="_blank" rel="noopener">here</a>. </p>
<p>Below are more resources for specific languages, including comparisons between languages, and same algorithms illustrated in different languages.</p>
<ul>
<li><a href="https://www.datasciencecentral.com/page/search?q=python" target="_blank" rel="noopener">Python</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=python+vs+R" target="_blank" rel="noopener">Python vs R</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=R" target="_blank" rel="noopener">R</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=sql" target="_blank" rel="noopener">SQL</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=sas" target="_blank" rel="noopener">SAS</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=julia" target="_blank" rel="noopener">Julia</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=scala" target="_blank" rel="noopener">Scala</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=java" target="_blank" rel="noopener">Java</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=c" target="_blank" rel="noopener">C</a></li>
<li><a href="https://www.datasciencecentral.com/page/search?q=matlab" target="_blank" rel="noopener">Matlab</a></li>
</ul>
<p>To quickly learn these languages or refresh your skills, check out our <a href="https://www.datasciencecentral.com/page/search?q=cheat+sheets" target="_blank" rel="noopener">cheat sheets</a>.</p>
<p></p>Quick Primer On Graph Data Structuretag:www.analyticbridge.datasciencecentral.com,2020-01-21:2004291:BlogPost:3967312020-01-21T17:12:58.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><span>While many programming libraries encapsulate the inner workings of graph and other algorithms, it helps a data scientist to have reasonably good familiarity with those details. A solid understanding of the intuition behind such algorithms not only helps in appreciating the logic behind them but also helps in making conscious decisions about their applicability to real-life cases. There are several graph-based algorithms, and the most notable are the shortest path algorithms: Dijkstra’s, Bellman-Ford, A*, Floyd-Warshall and Johnson’s algorithms are commonly encountered. While these algorithms are discussed in many textbooks and informative resources online, I felt that not many provided visual examples illustrating the processing steps at sufficient granularity to enable easy understanding of the working details. As such, I had to use simple enough graphs to visualize the algorithmic flow for my own understanding, and I wanted to share my examples along with the explanations through this article. Since there are many algorithms to illustrate, I decided to divide the article into several parts. In part 1, I have illustrated Dijkstra’s and Bellman-Ford algorithms. Before diving into the algorithms, I also wanted to highlight salient points about the graph data structure.</span></p>
<p></p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3829903896?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3829903896?profile=RESIZE_710x" class="align-center"/></a></p>
<p><strong>Content of this article</strong>:</p>
<ul>
<li>Quick Primer On Graph Data Structure</li>
<li>Dijkstra’s Algorithm</li>
<li>Bellman-Ford Algorithm</li>
<li>More Algorithms To Cover</li>
</ul>
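<p>As a taste of what the article covers, here is a minimal sketch of Dijkstra's algorithm (my own illustration, using a binary heap over an adjacency-list graph, not the article's code):</p>

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source on a graph with non-negative
    edge weights, given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

<p>Bellman-Ford instead relaxes every edge |V| - 1 times, which is slower but also handles negative edge weights, the case Dijkstra's greedy choice cannot.</p>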
<p>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/illustration-of-key-graph-based-shortest-path-algorithms" target="_blank" rel="noopener">here</a>. </p>
<p><em>Written by Murali Kashaboina, Tech. Executive, PhD Researcher AI/ML/DS, Data Scientist, Industry Speaker, Entrepreneur.</em></p>TensorFlow 1.x vs 2.x. – summary of changestag:www.analyticbridge.datasciencecentral.com,2020-01-09:2004291:BlogPost:3962502020-01-09T16:49:08.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>In 2019, Google announced TensorFlow 2.0, a major leap from TensorFlow 1.x. The key differences are as follows:</p>
<p><strong>Ease of use:</strong><span> </span>Many old libraries (for example, tf.contrib) were removed, and others were consolidated. In TensorFlow 1.x a model could be built using Contrib, layers, Keras or estimators; so many options for the same task confused new users. TensorFlow 2.0 promotes TensorFlow Keras for model experimentation and Estimators for scaled serving, and both APIs are very convenient to use.</p>
<p><strong>Eager Execution</strong>: In TensorFlow 1.x, writing code was divided into two parts: building the computational graph, and later creating a session to execute it. This was quite cumbersome, especially if a small error existed somewhere near the beginning of a big model you had designed. In TensorFlow 2.0, eager execution is enabled by default, i.e. you no longer need to create a session to run the computational graph; you can see the result of your code directly.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3810094427?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3810094427?profile=RESIZE_710x" class="align-center"/></a></p>
<p><strong>Model building and deployment made easy:</strong> With TensorFlow 2.0 providing the high-level TensorFlow Keras API, the user has greater flexibility in creating models. One can define a model using the Keras functional or sequential API. The TensorFlow Estimator API allows one to run a model on a local host or in a distributed multi-server environment without changing the model. Computational graphs are powerful in terms of performance: in TensorFlow 2.0 you can use the decorator<span> </span><strong>tf.function</strong><span> </span>so that the decorated function is run as a single graph. This is done via the powerful Autograph feature of TensorFlow 2.0, which allows users to optimize the function and increase portability. And best of all, you can write the function using natural Python syntax.</p>
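<p>A minimal sketch of the eager-execution and tf.function points above, assuming a TensorFlow 2.x installation:</p>

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Eager execution: operations run immediately, no Session required.
x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
y = tf.matmul(x, w)
print(y.numpy())  # [[11.]]

# tf.function traces the Python function into a single graph for speed,
# while the call site still looks like ordinary Python.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

print(affine(x, w, tf.constant(1.0)).numpy())  # [[12.]]
```

<p>In TensorFlow 1.x both results would have required building the graph first and then evaluating it inside a tf.Session.</p>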
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/tensorflow-1-x-vs-2-x-summary-of-changes" target="_blank" rel="noopener">here</a>. To access the author's books covering machine learning, Azure, Tensorflow, deep learning and related topics (free for DSC members), <a href="https://www.datasciencecentral.com/profiles/blogs/new-books-and-resources-for-dsc-members" target="_blank" rel="noopener">follow this link</a>. </em></p>The Next Big Thing in AI/ML is…tag:www.analyticbridge.datasciencecentral.com,2020-01-07:2004291:BlogPost:3961592020-01-07T14:41:30.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><strong><em>Summary:</em></strong><em> AI/ML itself is the next big thing for many fields if you’re on the outside looking in. But if you’re a data scientist it’s possible to see those advancements that will propel AI/ML to its next phase of utility.</em></p>
<p> </p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3673028562?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3673028562?profile=RESIZE_710x" width="350" class="align-right"/></a>“The Next Big Thing in AI/ML is…” as the lead to an article is probably the most overused trope since “once upon a time”. Seriously, just how many ‘next big things’ can there be? Is your incredulity not stretched every time you read that?</p>
<p>It’s tempting to say that writers starting an article in this way should be flogged …except that yours truly did recently start one with “<a href="https://www.datasciencecentral.com/profiles/blogs/causality-the-next-most-important-thing-in-ai-ml"><em><u>the next most IMPORTANT thing in AI/ML</u></em></a>…” Well that’s clearly different isn’t it – almost.</p>
<p>If you label something ‘next big thing’ it’s evident you have a strong opinion – or your marketing department has no imagination. </p>
<p>First of all, if you’re on the outside of AI/ML looking in, AI/ML clearly is the next big thing. Most next-big-thing articles are actually in this category, explaining how AI/ML can enhance everything from your dating life to your investment portfolio.</p>
<p>But if you’re fortunate enough to be on the inside, as our readers are, then you know that the future of AI/ML is developing along many different paths, and some of those should be more important than others. Some are technical, some are applications, and some are even social or philosophical. So how do we tell what the next big thing is, or at least what the rankings should be?</p>
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/the-next-big-thing-in-ai-ml-is" target="_blank" rel="noopener">here</a>. For more recent articles about AI, <a href="https://www.datasciencecentral.com/page/search?q=ai" target="_blank" rel="noopener">follow this link</a>. </em></p>How exactly do you determine causation?tag:www.analyticbridge.datasciencecentral.com,2019-12-17:2004291:BlogPost:3958132019-12-17T21:30:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><em>Another good article by Ajit Jaokar. </em></p>
<p><strong><em>Correlation does not equal causation</em></strong><span> </span>is a mantra drilled into data scientists from an early age.</p>
<p>That’s fine. But very few address the follow-on question:</p>
<p><strong><em>How exactly do you determine causation?</em></strong></p>
<p>This problem is further compounded because most books and examples are based on standard datasets (e.g., Boston, Iris). These examples do not discuss causation because the features chosen are already determined to be causal (e.g., the factors affecting house prices are chosen to be causal). So, if we start from the beginning (without simplified examples), how do you know whether a particular variable is a causal variable?</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3774867428?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3774867428?profile=RESIZE_710x" class="align-center"/></a></p>
<p>Firstly, causality cannot be determined from data alone. Data gives co-relation, but data alone cannot determine causation. To determine causation, we need to perform an<span> </span><strong>experiment or a controlled study</strong>.</p>
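<p>A toy simulation makes the point (hypothetical numbers, my own sketch): when treatment is randomly assigned, a hidden confounder cannot bias the comparison, so the simple difference in group means recovers the causal effect:</p>

```python
import random

random.seed(1)

# Simulated randomized experiment: the true causal effect of treatment
# on the outcome is set to 2.0 (a hypothetical value).
n, effect = 10_000, 2.0
treated, control = [], []
for _ in range(n):
    confounder = random.gauss(0, 1)    # influences the outcome, not assignment
    assign = random.random() < 0.5     # the randomization step
    outcome = confounder + (effect if assign else 0.0) + random.gauss(0, 1)
    (treated if assign else control).append(outcome)

# Because assignment is random, the naive difference in means is unbiased:
est = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated causal effect: {est:.2f}")   # close to the true 2.0
```

<p>Without the randomization step, a confounder that drove both assignment and outcome would contaminate the same difference in means, which is exactly why observational correlation alone cannot establish causation.</p>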
<p>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/correlation-does-not-equal-causation-but-how-exactly-do-you/" target="_blank" rel="noopener">here</a>. For other articles on this topic, <a href="https://www.datasciencecentral.com/page/search?q=causation" target="_blank" rel="noopener">follow this link</a>. Other relevant articles include:</p>
<ul>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/how-to-lie-with-p-values" target="_blank" rel="noopener">How to Lie with P-values</a></li>
<li><a href="https://www.datasciencecentral.com/profiles/blogs/six-degrees-of-separation-between-any-two-data-sets" target="_blank" rel="noopener">Six Degrees of Separation Between Any Two Data Sets</a></li>
<li><a href="https://www.analyticbridge.datasciencecentral.com/profiles/blogs/the-curse-of-big-data" target="_blank" rel="noopener">The curse of Big Data</a></li>
<li>Chapter 27 (about strong correlation) <a href="https://www.datasciencecentral.com/profiles/blogs/free-book-statistics-new-foundations-toolbox-and-machine-learning" target="_blank" rel="noopener">in this book</a></li>
<li>Pages 165-166 <a href="https://www.datasciencecentral.com/profiles/blogs/my-data-science-book" target="_blank" rel="noopener">in this book</a></li>
</ul>
<p></p>Rule of thumb: Which AI / ML algorithms to applytag:www.analyticbridge.datasciencecentral.com,2019-12-17:2004291:BlogPost:3958102019-12-17T16:00:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><em>Written by Ajit Jaokar.</em></p>
<p>Firstly, there are three broad categories of algorithms:</p>
<ul>
<li><strong>Supervised learning:</strong><span> </span>You know how to classify the input data and the type of behavior you want to predict, but you need the algorithm to calculate it for you on new data</li>
<li><strong>Unsupervised learning:</strong><span> </span>You do not know how to classify the data, and you want the algorithm to find patterns and classify the data for you</li>
<li><strong>Reinforcement learning:</strong><span> </span>An algorithm which learns by trial and error by interacting with the environment. You use it when you don’t have a lot of training data; you cannot clearly define the ideal end state; or the only way to learn about the environment is to interact with it</li>
</ul>
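<p>The first two categories can be contrasted in a few lines of sklearn (a hedged sketch on synthetic data, not from the article):</p>

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Two well-separated synthetic groups (hypothetical data).
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the labels y are known; the model learns to predict them.
clf = LogisticRegression().fit(X, y)
print(f"supervised accuracy: {clf.score(X, y):.2f}")

# Unsupervised: no labels; the algorithm discovers the groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(f"clusters found: {len(set(km.labels_))}")
```

<p>Reinforcement learning has no such one-liner: it needs an environment to interact with, typically via a simulator such as a Gym-style API.</p>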
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3774522423?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3774522423?profile=RESIZE_710x" class="align-center"/></a></p>
<p>So, let us consider which algorithms can apply to business problems.</p>
<p><strong>1. Customer services and supply chain</strong></p>
<ul>
<li>Understand product-sales drivers such as competition prices, distribution, advertisement, etc.:<span> </span><strong>linear regression</strong></li>
<li>Optimize price points and estimate product-price elasticities:<span> </span><strong>linear regression</strong></li>
<li>Classify customers based on how likely they are to repay a loan:<span> </span><strong>logistic regression</strong></li>
<li>Predict client churn:<span> </span><strong>linear/quadratic discriminant analysis</strong></li>
<li>Predict a sales lead’s likelihood of closing:<span> </span><strong>linear/quadratic discriminant analysis</strong></li>
<li>Detect a company logo in social media to better understand joint marketing opportunities (e.g., pairing of brands in one product):<span> </span><strong>convolutional neural networks</strong></li>
<li>Understand customer brand perception and usage through images:<span> </span><strong>convolutional neural networks</strong></li>
</ul>
<p><em>To read the full article featuring other applications, including in healthcare and trading, <a href="https://www.datasciencecentral.com/profiles/blogs/rule-of-thumb-which-ai-ml-algorithms-to-apply-to-business-1" target="_blank" rel="noopener">follow this link</a>. For other articles by Ajit Jaokar, <a href="https://www.datasciencecentral.com/profiles/blog/list?user=32ac9fc41n4f4" target="_blank" rel="noopener">visit this webpage</a>. Details about these algorithms can be found <a href="https://www.datasciencecentral.com/page/search?q=algorithm" target="_blank" rel="noopener">here</a>. </em></p>
<p>There's no doubt about it, probability and statistics is an enormous field, encompassing topics from the familiar (like the average) to the complex (regression analysis, correlation coefficients and hypothesis testing to name but a few). If you want to be a great data scientist, you have to know some basic statistics. The following picture shows which statistics topics you must know if you're going to excel in data science.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3767565869?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3767565869?profile=RESIZE_710x" class="align-center"/></a></p>
<p>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/statistics-for-data-science-in-one-picture" target="_blank" rel="noopener">here</a>. For more concepts explained in one picture, follow <a href="https://www.datasciencecentral.com/page/search?q=in+one+picture" target="_blank" rel="noopener">this link</a>. For articles about statistical and machine learning concepts explained in simple English, from the same author, follow <a href="https://www.datasciencecentral.com/page/search?q=in+simple+english" target="_blank" rel="noopener">this link</a>. Or to download a book featuring many of these resources, click <a href="https://www.datasciencecentral.com/profiles/blogs/online-encyclopedia-of-statistical-science-free-1" target="_blank" rel="noopener">here</a> (free, but available exclusively to DSC members).</p>
<p><strong>From our Sponsors</strong></p>
<ul>
<li><a href="https://dsc.news/34h27EX" target="_blank" rel="noopener">Future-proof your path to Enterprise AI</a> - Dataiku 6 Webinar Recording</li>
</ul>
<p></p>On Being a 50 Year Old Data Scientisttag:www.analyticbridge.datasciencecentral.com,2019-12-10:2004291:BlogPost:3955862019-12-10T18:51:04.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>At the time of writing, I'm a 52-year-old working in the fields of mathematics and data science. In mathematics, that makes me well-seasoned (and probably well-tenured, if I had chosen to continue in academia). In data science, some would consider me a dinosaur. In fact, many older people considering a career in data science might be put off by the thought that data science is tough to break into at a later age. But is that statement true? Should the over-50 crowd put down their textbooks and pick up their gardening tools?</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3764064994?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3764064994?profile=RESIZE_710x" class="align-center"/></a></p>
<p><strong>Is Math a Young Person's Game? Maybe</strong></p>
<p>As for the mathematics portion of my career, I didn't become a mathematician until I was in my mid-thirties. Before that, I dabbled in whatever venture brought in a few bob to feed the kids: computer operator, eBay entrepreneur, aviation electrician. I was 36 when I decided to go back to school to get my master's. If Alfred Adler is to be believed, my "mathematical life" had already long passed by the time I graduated.</p>
<p><em>"Work rarely improves after the age of twenty-five or thirty. If little has been accomplished by then, little will ever be accomplished."</em></p>
<p>Read the full article by Stephanie Glen, <a href="https://www.datasciencecentral.com/profiles/blogs/on-being-a-50-year-old-data-scientist" target="_blank" rel="noopener">here</a>. For other articles by Stephanie Glen, <a href="https://www.datasciencecentral.com/profiles/blog/list?user=0lahn4b4odglr" target="_blank" rel="noopener">follow this link</a>. </p>
<p><strong>Sponsored Announcement</strong></p>
<ul>
<li><span>Be Indispensable With a Master’s in Data Analytics. As technology and the marketplace change constantly, you want the skills to thrive. The UCLA Anderson Master of Science in Business Analytics is a 13-month program that will give you the tools to become a leader in this rapidly evolving field. Read more <a href="https://dsc.news/2KTpz3V" target="_blank" rel="noopener">here</a>. </span></li>
</ul>Variance, Attractors and Behavior of Chaotic Statistical Systemstag:www.analyticbridge.datasciencecentral.com,2019-11-29:2004291:BlogPost:3957632019-11-29T09:30:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><span>We study the properties of a typical chaotic system to derive general insights that apply to a large class of unusual statistical distributions. The purpose is to create a unified theory of these systems. These systems can be deterministic or random, yet due to their gentle chaotic nature, they exhibit the same behavior in both cases. They lead to new models with numerous applications in Fintech, cryptography, simulation and benchmarking tests of statistical hypotheses. They are also related to numeration systems. One of the highlights in this article is the discovery of a simple variance formula for an infinite sum of highly correlated random variables. We also try to find and characterize attractor distributions: these are the limiting distributions for the systems in question, just like the Gaussian attractor is the universal attractor with finite variance in the central limit theorem framework. Each of these systems is governed by a specific functional equation, typically a stochastic integral equation whose solutions are the attractors. This equation helps establish many of their properties. The material discussed here is state-of-the-art and original, yet presented in a format accessible to professionals with limited exposure to statistical science. Physicists, statisticians, data scientists and people interested in signal processing, chaos modeling, or dynamical systems will find this article particularly interesting. Connection to other similar chaotic systems is also discussed. </span></p>
<p>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/chaos-attractors-in-machine-learning-systems" target="_blank" rel="noopener">here</a>. </p>
<p><span><a href="https://storage.ning.com/topology/rest/1.0/file/get/3746624910?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3746624910?profile=RESIZE_710x" class="align-center"/></a></span></p>
<p><span><strong>Content of this article</strong></span></p>
<p>1. The Geometric System: Definition and Properties</p>
<ul>
<li>A test for independence</li>
<li>Connection to the Fixed-Point Theorem</li>
</ul>
<p>2. Geometric and Uniform Attractors</p>
<ul>
<li>General formula</li>
<li>The geometric attractor</li>
<li>Not every distribution can be an attractor</li>
<li>The uniform attractor</li>
</ul>
<p>3. Discrete <em>X</em> Resulting in a Gaussian-looking Attractor</p>
<ul>
<li>Towards a numerical solution</li>
</ul>
<p>4. Special Cases with Continuous Distribution for <em>X</em></p>
<ul>
<li>An almost perfect equality</li>
<li>Is the log-normal distribution an attractor?</li>
</ul>
<p>5. Connection to Binary Digits and Singular Distributions</p>
<ul>
<li>Numbers made up of random digits</li>
<li>Singular distributions</li>
<li>Connection to Infinite Random Products</li>
</ul>
<p>6. A General Classification of Chaotic Statistical Distributions</p>
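<p>The article's exact systems and variance formula are behind the link, but the flavor of "an infinite sum of highly correlated random variables" can be conveyed with a hypothetical toy case (my choice for illustration, not the article's system): the random perpetuity <em>Z</em> = <em>X</em>(1) + <em>X</em>(1)<em>X</em>(2) + <em>X</em>(1)<em>X</em>(2)<em>X</em>(3) + ..., whose terms share common factors and are therefore strongly correlated. A quick Monte Carlo sketch, assuming uniform <em>X</em>(<em>k</em>)'s:</p>

```python
import random

def sample_Z(depth=60, rng=random):
    """One draw of Z = X1 + X1*X2 + X1*X2*X3 + ..., truncated at `depth`.
    With X uniform on (0, 1), term k has expectation 2^-k, so truncation
    error at depth 60 is negligible."""
    total, prod = 0.0, 1.0
    for _ in range(depth):
        prod *= rng.random()   # X_k ~ Uniform(0, 1)
        total += prod
    return total

def monte_carlo(n=50_000, seed=42):
    rng = random.Random(seed)
    draws = [sample_Z(rng=rng) for _ in range(n)]
    mean = sum(draws) / n
    var = sum((z - mean) ** 2 for z in draws) / (n - 1)
    return mean, var

mean, var = monte_carlo()
```

<p>Despite the strong term-to-term correlation, this toy case has a closed form: the self-similarity <em>Z</em> = <em>X</em>(1 + <em>Z</em>') gives E[<em>Z</em>] = 1 and Var(<em>Z</em>) = 1/2, which the simulation reproduces.</p>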
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/chaos-attractors-in-machine-learning-systems" target="_blank" rel="noopener">here</a>. </em></p>A Lesson in Using NLP for Hidden Feature Extractiontag:www.analyticbridge.datasciencecentral.com,2019-11-29:2004291:BlogPost:3956562019-11-29T05:00:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><strong><em>Summary:</em></strong><em> 99% of our applications of NLP have to do with chatbots or translation. This is a very interesting story about expanding the bounds of NLP and feature creation to predict bestselling novels. The authors created over 20,000 NLP features, about 2,800 of which proved to be predictive; the resulting model identifies NYT bestsellers with a 90% accuracy rate.</em></p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3515945869?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3515945869?profile=RESIZE_710x" width="300" class="align-right"/></a>It’s a pretty rare individual who hasn’t had a personal experience with NLP (Natural Language Processing). About 99% of those experiences are in the form of chatbots or translators, either text or speech in, and text or speech out.</p>
<p>This has proved to be one of the hottest and most economically valuable applications of deep learning but it’s not the whole story.</p>
<p>I recently picked up a copy of a 2016 book entitled<span> </span><em>“The Bestseller Code – Anatomy of the Blockbuster Novel”</em><span> </span>which promised a story about using NLP and machine learning to predict which US fiction novels would make the New York Times Best Sellers list and which would not.</p>
<p>There are about 55,000 new works of fiction published each year (and that doesn’t count self-published). Less than 0.5% or about 200 to 220 make the NYT Bestseller list in a year. Only 3 or 4 of those will sell more than a million copies.</p>
<p>The authors, Jodie Archer (background in publishing), and Matt Jockers (cofounder of the Stanford Literary Lab) write about their model which has an astounding 90% success rate in predicting which books will make the NYT list using a corpus of 5,000 novels from the last 30 years which included 500 NYT Bestsellers.</p>
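<p>To put those numbers side by side (assuming, as seems likely, that the 90% figure is accuracy measured on the authors' 5,000-novel corpus, which is far more balanced than the real publishing market):</p>

```python
# Back-of-the-envelope base rates behind the headline numbers.
new_fiction_per_year = 55_000        # traditionally published, per the article
bestsellers_per_year = 220           # upper end of the "200 to 220" range
wild_base_rate = bestsellers_per_year / new_fiction_per_year

corpus_size, corpus_bestsellers = 5_000, 500
corpus_base_rate = corpus_bestsellers / corpus_size

print(f"base rate in the wild: {wild_base_rate:.2%}")   # prints 0.40%
print(f"base rate in corpus:   {corpus_base_rate:.2%}") # prints 10.00%
```

<p>A reminder, not a criticism: accuracy on a corpus with a 10% base rate does not translate directly into precision at the 0.4% base rate of all new fiction.</p>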
<p>The book, which I heartily recommend, is not a data science book, nor is it a how-to-write-a-bestseller. And while it has elements of both it’s mostly reporting about the most interesting finds among the 20,000 extracted features they developed, about 2,800 of which proved to be predictive. More on that later.</p>
<p>What struck me was the potential this field of ‘stylometrics’ has for extracting hidden features for almost any problem that has a large amount of text as one of its data sources: CSR logs of customer interactions, doctors’ notes, blogs, or warranty repair descriptions, where we’re really only scratching the surface with word clouds and sentiment analysis.</p>
<p><em>Read full article <a href="https://www.datasciencecentral.com/profiles/blogs/nlp-picks-bestsellers-a-lesson-in-using-nlp-for-hidden-feature-ex" target="_blank" rel="noopener">here</a>.</em></p>New Family of Generalized Gaussian Distributionstag:www.analyticbridge.datasciencecentral.com,2019-11-28:2004291:BlogPost:3957602019-11-28T06:14:46.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><span>In this article, we explore a new class of generalized univariate normal distributions that satisfy useful statistical properties, with interesting applications. This new class of distributions is defined by its characteristic function, and applications are discussed in the last section. These distributions are semi-stable (we define what this means below). In short, it is a much wider class than the stable distributions (the only stable distribution with a finite variance being the Gaussian one), and it encompasses all stable distributions as a subset. It is a sub-class of the divisible distributions.</span></p>
<p><span><a href="https://storage.ning.com/topology/rest/1.0/file/get/3744926698?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3744926698?profile=RESIZE_710x" class="align-center"/></a></span></p>
<p><strong>Content of this article:</strong></p>
<ul>
<li>New two-parameter distribution <em>G</em>(<em>a</em>, <em>b</em>): introduction, properties</li>
<li>Generalized central limit theorem</li>
<li>Characteristic function</li>
<li>Density: special cases, moments, mathematical conjecture</li>
<li>Simulations</li>
<li>Weakly semi-stable distributions</li>
<li>Counter-example</li>
<li>Applications and conclusions</li>
</ul>
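<p>The definition of <em>G</em>(<em>a</em>, <em>b</em>) is given in the full article. As a generic illustration of the key tool here — a distribution defined through its characteristic function — the sketch below numerically inverts a characteristic function to recover a density value, using the ordinary Gaussian as the test case since its characteristic function exp(-<em>t</em>²/2) is known in closed form:</p>

```python
import math

def density_from_cf(cf, x, t_max=10.0, n=4000):
    """Recover a density value by numerically inverting a real, symmetric
    characteristic function: f(x) = (1/(2*pi)) * integral of cos(t*x)*cf(t) dt,
    approximated by the trapezoid rule on [-t_max, t_max]."""
    h = 2 * t_max / n
    total = 0.0
    for k in range(n + 1):
        t = -t_max + k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * math.cos(t * x) * cf(t)
    return total * h / (2 * math.pi)

gaussian_cf = lambda t: math.exp(-t * t / 2)   # characteristic function of N(0, 1)
f0 = density_from_cf(gaussian_cf, 0.0)          # should match 1/sqrt(2*pi)
```

<p>The cosine form is valid because the characteristic function of a symmetric distribution is real and even; for an asymmetric distribution the full complex exponential would be needed.</p>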
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/new-family-of-generalized-gaussian-distributions" target="_blank" rel="noopener">here</a>. </em></p>
<p></p>10 Machine Learning Methods that Every Data Scientist Should Knowtag:www.analyticbridge.datasciencecentral.com,2019-11-27:2004291:BlogPost:3957572019-11-27T17:58:33.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p id="a572" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">Machine learning is a hot topic in research and industry, with new methodologies developed all the time. The speed and complexity of the field make keeping up with new techniques difficult even for experts — and potentially overwhelming for beginners.</p>
<p id="0d4d" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">To demystify machine learning and to offer a learning path for those who are new to the core concepts, let’s look at ten different methods, including simple descriptions, visualizations, and examples for each one.</p>
<p id="64a5" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">A machine learning algorithm, also called a model, is a mathematical expression that represents data in the context of a problem, often a business problem. The aim is to go from data to insight. For example, if an online retailer wants to anticipate sales for the next quarter, they might use a machine learning algorithm that predicts those sales based on past sales and other relevant data. Similarly, a windmill manufacturer might visually monitor important equipment and feed the video data through algorithms trained to identify dangerous cracks.</p>
<p class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw"><a href="https://storage.ning.com/topology/rest/1.0/file/get/3744174486?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3744174486?profile=RESIZE_710x" class="align-center"/></a></p>
<p id="00c2" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">The ten methods described offer an overview — and a foundation you can build on as you hone your machine learning knowledge and skill:</p>
<ol class="">
<li id="b886" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw nx ny nz">Regression</li>
<li id="2763" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Classification</li>
<li id="54dd" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Clustering</li>
<li id="c007" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Dimensionality Reduction</li>
<li id="1af1" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Ensemble Methods</li>
</ol>
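<p>As a taste of method 1, here is a minimal sketch of the retailer example above: an ordinary least squares fit of next-quarter sales on last-quarter sales, in pure Python. The numbers are made up for illustration.</p>

```python
last_q = [10.0, 12.0, 15.0, 13.0, 18.0, 20.0]   # past quarterly sales ($k)
next_q = [11.0, 13.5, 16.0, 14.0, 19.5, 21.0]   # sales in the following quarter

n = len(last_q)
mx = sum(last_q) / n
my = sum(next_q) / n
# slope = cov(x, y) / var(x); the intercept follows from the means
slope = sum((x - mx) * (y - my) for x, y in zip(last_q, next_q)) \
        / sum((x - mx) ** 2 for x in last_q)
intercept = my - slope * mx

def predict(sales):
    """Forecast next-quarter sales from last-quarter sales."""
    return intercept + slope * sales

print(round(predict(22.0), 2))   # forecast for a quarter following $22k in sales
```

<p>Real models would of course use more features and a proper train/test split; the point is only that "regression" reduces to fitting a line (or hyperplane) through historical data.</p>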
<p><em>Read the rest of the list, with descriptions of all 10 algorithms, <a href="https://www.datasciencecentral.com/profiles/blogs/10-machine-learning-methods-that-every-data-scientist-should-know" target="_blank" rel="noopener">here</a>. </em></p>10 Visualizations Every Data Scientist Should Knowtag:www.analyticbridge.datasciencecentral.com,2019-11-12:2004291:BlogPost:3954782019-11-12T17:00:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><em>This article is by Jorge Castañón, Ph.D., Senior Data Scientist at the IBM Machine Learning Hub.</em></p>
<p><em>This article is by Jorge Castañón, Ph.D., Senior Data Scientist at the IBM Machine Learning Hub.</em></p>
<p id="5920" class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv">Data visualization plays two key roles:</p>
<p id="085d" class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv">1.<span> </span><em class="op">Communicating results clearly to a general audience.</em></p>
<p id="c440" class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv">2.<span> </span><em class="op">Organizing a view of data that suggests a new hypothesis or a next step in a project.</em></p>
<p id="f14e" class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv">It’s no surprise that most people prefer visuals to large tables of numbers. That’s why clearly labeled plots with meaningful interpretation always make it to the front of academic papers.</p>
<p class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv"><a href="https://storage.ning.com/topology/rest/1.0/file/get/3709852824?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3709852824?profile=RESIZE_710x" class="align-center"/></a></p>
<p id="6028" class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv">This post looks at the 10 visualizations you can bring to bear on your data — whether you want to convince the wider world of your theories or crack open your own project and take the next step:</p>
<ol class="">
<li id="53c6" class="ni nj en ao nk b nl nm nn no np nq nr ns nt nu nv oq or os">Histograms</li>
<li id="ddc7" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Bar/Pie charts</li>
<li id="6fcc" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Scatter/Line plots</li>
<li id="3613" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Time series</li>
<li id="6263" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Relationship maps</li>
<li id="c7df" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Heat maps</li>
<li id="d07c" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Geo Maps</li>
<li id="8f76" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">3-D Plots</li>
<li id="3965" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Higher-Dimensional Plots</li>
<li id="ec17" class="ni nj en ao nk b nl ot nn ou np ov nr ow nt ox nv oq or os">Word clouds</li>
</ol>
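<p>Taking visualization 1 as an example: the statistical heart of a histogram is the binning step, which takes a few lines of pure Python (a plotting library such as matplotlib would then draw the bars):</p>

```python
def histogram(values, n_bins=5):
    """Count how many values fall into each of n_bins equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0   # avoid zero width for constant data
    counts = [0] * n_bins
    for v in values:
        # clamp the index so the maximum value lands in the last bin
        idx = min(int((v - lo) / width), n_bins - 1)
        counts[idx] += 1
    return counts

data = [2.1, 2.4, 3.0, 3.2, 3.3, 4.8, 5.0, 5.1, 6.9, 7.0]
counts = histogram(data)
print(counts)   # prints [3, 2, 2, 1, 2]
```

<p>Choosing the number of bins is the interesting design decision: too few bins hide structure, too many show noise.</p>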
<p>Read the full article, with descriptions and illustrations for these visualizations, <a href="https://www.datasciencecentral.com/profiles/blogs/10-visualizations-every-data-scientist-should-know" target="_blank" rel="noopener">here</a>.</p>More Weird Statistical Distributionstag:www.analyticbridge.datasciencecentral.com,2019-10-27:2004291:BlogPost:3951392019-10-27T00:00:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>Some original and very interesting material is presented here, with possible applications in Fintech. No need for a PhD in math to understand this article: I tried to make the presentation as simple as possible, focusing on high-level results rather than technicalities. Yet, professional statisticians and mathematicians, even academic researchers, will find some deep and fascinating results worth further exploring.</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3681849077?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3681849077?profile=RESIZE_710x" class="align-center"/></a></p>
<p style="text-align: center;"><em>Can you identify patterns in this chart? (see section 2.2. in the article for an answer)</em></p>
<p>Let's start with </p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3681308901?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3681308901?profile=RESIZE_710x" class="align-center"/></a></p>
<p>Here the <em>X</em>(<em>k</em>)'s are random variables, independent and identically distributed, commonly referred to as <em>X</em>. We are trying to find the distribution of <em>Z</em>.</p>
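<p>Judging from the article's title, the equation above defines <em>Z</em> as an infinite nested radical, <em>Z</em> = √(<em>X</em>(1) + √(<em>X</em>(2) + √(<em>X</em>(3) + ...))). The exact distribution is the subject of the article; a truncated Monte Carlo sketch (with Uniform(0, 1) variables chosen here purely for illustration) gives a feel for it:</p>

```python
import math
import random

def nested_radical(xs):
    """Evaluate sqrt(x1 + sqrt(x2 + ... + sqrt(xn))) from the inside out."""
    z = 0.0
    for x in reversed(xs):
        z = math.sqrt(x + z)
    return z

# Sanity check: with every x = 1, the radical converges to the golden ratio.
phi = (1 + math.sqrt(5)) / 2
assert abs(nested_radical([1.0] * 50) - phi) < 1e-9

def sample_Z(depth=40, rng=random):
    # X(k) i.i.d.; Uniform(0, 1) is an arbitrary illustrative choice
    return nested_radical([rng.random() for _ in range(depth)])

rng = random.Random(7)
draws = [sample_Z(rng=rng) for _ in range(20_000)]
mean_Z = sum(draws) / len(draws)
```

<p>Truncating at depth 40 is more than enough: the influence of <em>X</em>(<em>k</em>) on <em>Z</em> decays roughly geometrically through the successive square roots.</p>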
<p><strong>Contents</strong></p>
<p>1. Using a Simple Discrete Distribution for <em>X</em></p>
<p>2. Towards a Better Model</p>
<ul>
<li>Approximate Solution</li>
<li>The Fractal, Brownian-like Error Term</li>
</ul>
<p>3. Finding <em>X</em> and <em>Z</em> Using Characteristic Functions</p>
<ul>
<li>Test with Log-normal Distribution for <em>X</em></li>
<li>Playing with the Characteristic Functions</li>
<li>Generalization to Continued Fractions and Nested Cubic Roots</li>
</ul>
<p>4. Exercises</p>
<p><em>Read this article <a href="https://www.datasciencecentral.com/profiles/blogs/math-fun-infinite-nested-radicals-of-random-variables" target="_blank" rel="noopener">here</a>. </em></p>
<p></p>Complete Hands-Off Automated Machine Learningtag:www.analyticbridge.datasciencecentral.com,2019-10-22:2004291:BlogPost:3948882019-10-22T20:30:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>By Bill Vorhies. </p>
<p><strong><em>Summary:</em></strong><em> Here’s a proposal for real ‘zero touch’, ‘set-em-and-forget-em’ machine learning from the researchers at Amazon. If you have an environment as fast-changing as e-retail and a huge number of models matching buyers and products, you could achieve real cost savings and revenue increases by making the refresh cycle faster and more accurate with automation. This capability will likely be coming soon to your favorite AML platform.</em></p>
<p><em><a href="https://storage.ning.com/topology/rest/1.0/file/get/3674974988?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3674974988?profile=RESIZE_710x" class="align-center"/></a></em></p>
<p>Is there a future in which we can really ‘set-em-and-forget-em’ machine learning? So far, Automated Machine Learning (AML) has delivered on vastly simplifying the creation of models, but maintenance, refresh, and updates still require manual intervention.</p>
<p>Not that we’re trying to talk ourselves out of a job. But after all, once the model is built and implemented it’s more fun to move on to the next opportunity. If the maintenance and refresh cycle could be truly automated that would be a good thing.</p>
<p>Much of the effort so far has been put into simplifying getting the model out of its AML environment and into its production environment. Facebook’s FBLearner is an example of this. A number of platforms claim to ease this process for the rest of us. At least once we manually refresh the model it’s easier to update it in production.</p>
<p><em>Read full article <a href="https://www.datasciencecentral.com/profiles/blogs/complete-hands-off-automated-machine-learning" target="_blank" rel="noopener">here</a>. </em></p>40+ Modern Tutorials Covering All Aspects of Machine Learningtag:www.analyticbridge.datasciencecentral.com,2019-10-13:2004291:BlogPost:3947202019-10-13T17:00:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p><span>This list of lists contains books, notebooks, presentations, cheat sheets, and tutorials covering all aspects of data science, machine learning, deep learning, statistics, math, and more, with most documents featuring Python or R code and numerous illustrations or case studies. All this material is available for free, and consists of content mostly created in 2019 and 2018, by various top experts in their respective fields. A few of these documents are available on LinkedIn: see last section on how to download them. </span></p>
<p><span><a href="https://storage.ning.com/topology/rest/1.0/file/get/3660371847?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3660371847?profile=RESIZE_710x" class="align-center"/></a></span></p>
<p><span>Below are the first two sections.</span></p>
<p><strong>General References</strong></p>
<ul>
<li>Free Deep Learning Book (639 pages) by Prof. Gilles Louppe</li>
<li>Python Crash Course (562 pages) by Eric Matthes</li>
<li>Free Book: Applied Data Science (141 pages) - Columbia University</li>
<li>Data Science in Practice</li>
<li>Machine Learning 101 - By Jason Mayes, Google</li>
<li>The Ultimate guide to AI, Data Science & Machine Learning</li>
<li>Free Handbooks for Data Science Professionals</li>
<li>Free Book: Natural Language Processing with Python</li>
<li>Data Visualization Resources</li>
<li>Textbook: Probability Course - Harvard University</li>
<li>Textbook: The Math of Machine Learning - Berkeley University</li>
<li>Comprehensive Guide to Machine Learning - Berkeley University</li>
<li>Free Book: Foundations of Data Science - by Microsoft Research</li>
<li>Comprehensive Guide on Machine Learning - by J.P. Morgan</li>
<li>Gentle Approach to Linear Algebra - by Vincent Granville</li>
</ul>
<p><strong>Data Science Central Books, Booklets and References</strong></p>
<ul>
<li>Statistics: New Foundations, Toolbox, and Machine Learning Recipes</li>
<li>Deep Learning and Computer Vision with CNNs</li>
<li>Getting Started with TensorFlow 2.0</li>
<li>Classification and Regression in a Weekend</li>
<li>Online Encyclopedia of Statistical Science</li>
<li>Azure Machine Learning in a Weekend</li>
<li>Enterprise AI - An Application Perspective</li>
<li>Applied Stochastic Processes</li>
<li>Comprehensive Repository of Data Science and ML Resources</li>
<li>Foundations of ML and Data Science for Developers</li>
<li>Elegant Representation of Forward/Back Propagation in Neural Networks</li>
<li>Learning the Math of Data Science</li>
</ul>
<p>To access all these documents and more, <a href="https://www.datasciencecentral.com/profiles/blogs/40-tutorials-covering-all-aspects-of-machine-learning" target="_blank" rel="noopener">follow this link</a>.</p>Surprising Uses of Synthetic Random Data Setstag:www.analyticbridge.datasciencecentral.com,2019-10-02:2004291:BlogPost:3947462019-10-02T23:00:00.000ZVincent Granvillehttps://www.analyticbridge.datasciencecentral.com/profile/VincentGranville
<p>I have used synthetic data sets many times for simulation purposes, most recently in my articles <a href="https://www.datasciencecentral.com/profiles/blogs/six-degrees-of-separation-between-any-two-data-sets" target="_blank" rel="noopener">Six Degrees of Separation Between Any Two Data Sets</a> and <a href="https://www.datasciencecentral.com/profiles/blogs/how-to-lie-with-p-values" target="_blank" rel="noopener">How to Lie with p-values</a>. Many applications (including the data sets themselves) can be found in my books <a href="https://www.datasciencecentral.com/profiles/blogs/fee-book-applied-stochastic-processes" target="_blank" rel="noopener">Applied Stochastic Processes</a> and <a href="https://www.datasciencecentral.com/profiles/blogs/free-book-statistics-new-foundations-toolbox-and-machine-learning" target="_blank" rel="noopener">New Foundations of Statistical Science</a>. For instance, these data sets can be used to benchmark statistical tests of hypotheses (the null hypothesis being known in advance to be true or false) and to assess the power of such tests or confidence intervals. In other cases, they are used to simulate clusters and test cluster detection / pattern detection algorithms, see <a href="https://www.analyticbridge.datasciencecentral.com/profiles/blogs/how-to-detect-a-pattern-problem-and-solution" target="_blank" rel="noopener">here</a>. 
I also used such data sets to discover two new deep conjectures in number theory (see<span> </span><a href="https://www.datasciencecentral.com/profiles/blogs/two-new-deep-conjectures-in-probabilistic-number-theory" target="_blank" rel="noopener">here</a>), to design new Fintech models such as<span> </span><em>bounded Brownian motions</em>, and find new families of statistical distributions (see<span> </span><a href="https://www.datasciencecentral.com/profiles/blogs/a-strange-family-of-statistical-distributions" target="_blank" rel="noopener">here</a>).</p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3641314354?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3641314354?profile=RESIZE_710x" class="align-center"/></a></p>
<p style="text-align: center;"><em>Goldbach's comet </em></p>
<p>In this article, I focus on peculiar random data sets to prove -- heuristically -- two of the most famous math conjectures in number theory, related to prime numbers: the Twin Prime conjecture, and the Goldbach conjecture. The methodology is at the intersection of probability theory, experimental math, and probabilistic number theory. It involves working with infinite data sets, dwarfing any data set found in any business context.</p>
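<p>To make the Goldbach side concrete, here is a minimal counting sketch in plain Python (an illustration of what Goldbach's comet plots, not the article's heuristic argument): for each even <em>n</em>, count the ways to write it as a sum of two primes.</p>

```python
def sieve(limit):
    """Sieve of Eratosthenes: returns a primality lookup list for 0..limit."""
    is_p = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, limit + 1, p):
                is_p[m] = False
    return is_p

def goldbach_count(n, is_p):
    """Number of ways to write even n as p + q with primes p <= q."""
    return sum(1 for p in range(2, n // 2 + 1) if is_p[p] and is_p[n - p])

is_p = sieve(1000)
# Goldbach's comet is the scatter plot of goldbach_count(n) over even n;
# the conjecture says the count is >= 1 for every even n > 2.
print([goldbach_count(n, is_p) for n in (10, 100, 1000)])
```
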
<p>Read full article <a href="https://www.datasciencecentral.com/profiles/blogs/surprising-uses-of-synthetic-random-data-sets?xg_source=activity" target="_blank" rel="noopener">here</a>. </p>
<p><strong>Six Degrees of Separation Between Any Two Data Sets</strong><br/><em>Vincent Granville, September 9, 2019</em></p>
<p>This is an interesting data science conjecture, inspired by the well known<span> </span><a href="https://www.bigdatanews.datasciencecentral.com/profiles/blogs/graph-theory-six-degrees-of-separation-problem" target="_blank" rel="noopener">six degrees of separation problem</a>, stating that there is a link involving no more than 6 connections between any two people on Earth, say between you and someone living in North Korea. </p>
<p>Here the link is between any two univariate data sets of the same size, say Data A and Data B. The claim is that there is a chain involving no more than 6 intermediary data sets, each highly correlated to the previous one (with a correlation above 0.8), between Data A and Data B. The concept is illustrated in the example below, where only 4 intermediary data sets (labeled Degree 1, Degree 2, Degree 3, and Degree 4) are actually needed. </p>
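<p>One simple way to build such a chain (a sketch of the general idea, not the spreadsheet construction used in the article) is to blend Data A into Data B in small steps, so that consecutive data sets stay highly correlated:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
a = rng.normal(size=n)  # Data A
b = rng.normal(size=n)  # Data B (independent of A)

# Chain A -> Degree 1 -> ... -> Degree 4 -> B via linear blending;
# each set in the chain is highly correlated with the previous one.
steps = np.linspace(0, 1, 6)  # 4 intermediary data sets
chain = [(1 - t) * a + t * b for t in steps]

corrs = [np.corrcoef(chain[i], chain[i + 1])[0, 1]
         for i in range(len(chain) - 1)]
print(corrs)  # consecutive correlations, all well above 0.8
```

<p>Even though A and B are independent here (so their direct correlation is near zero), each consecutive pair in the chain correlates strongly, which is the essence of the conjecture.</p>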
<p><img src="https://storage.ning.com/topology/rest/1.0/file/get/3547469050?profile=RESIZE_710x" class="align-center"/></p>
<p style="text-align: center;"><em>Correlation table for the 6 data sets</em></p>
<p>To view the (random) data sets, understand how the chain of intermediary data sets was built, and access the spreadsheets to reproduce the results or test on different data, <a href="https://www.datasciencecentral.com/profiles/blogs/six-degrees-of-separation-between-any-two-data-sets" target="_blank" rel="noopener">follow this link</a>. It makes for an interesting theoretical data science research project, for people with too much free time on their hands.</p>
<p><strong>Two New Deep Conjectures in Probabilistic Number Theory</strong><br/><em>Vincent Granville, September 8, 2019</em></p>
<p>The material discussed here is also of interest to machine learning, AI, big data, and data science practitioners, as much of the work is based on heavy data processing, algorithms, efficient coding, testing, and experimentation. Also, it's not just two new conjectures, but paths and suggestions to solve these problems. The last section contains a few new, original exercises, some with solutions, and may be useful to students, researchers, and instructors offering math and statistics classes at the college level: they range from easy to very difficult. Some great probability theorems are also discussed, in layman's terms: see section 1.2. </p>
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3546311327?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3546311327?profile=RESIZE_710x" class="align-center"/></a></p>
<p>The two deep conjectures highlighted in this article (conjectures B and C) are related to the digit distribution of well known math constants such as Pi or log 2, with an emphasis on binary digits of SQRT(2). This is an old problem, one of the most famous ones in mathematics, still unsolved today.</p>
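<p>The digit-distribution question is easy to probe empirically. Here is a small sketch (my own illustration, not code from the article) that computes the first binary digits of SQRT(2) exactly with integer arithmetic and checks that the proportion of ones looks close to 1/2, as it should if SQRT(2) is normal in base 2:</p>

```python
from math import isqrt

n_bits = 10_000
# floor(sqrt(2) * 2**n_bits), computed exactly: isqrt(2 * 4**n_bits)
x = isqrt(2 << (2 * n_bits))
bits = bin(x)[3:]  # drop '0b' and the leading '1' (integer part of sqrt(2))
ones = bits.count("1")
print(ones / len(bits))  # empirically close to 0.5
```

<p>No deviation from 1/2 has ever been observed empirically, yet proving it is exactly the kind of open problem conjectures B and C address.</p>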
<p><strong>Content of this article</strong></p>
<p>A Strange Recursive Formula</p>
<ul>
<li>Conjecture A</li>
<li>A deeper result</li>
<li>Conjecture B</li>
<li>Connection to the Berry-Esseen theorem</li>
<li>Potential path to solving this problem</li>
</ul>
<p>Potential Solution Based on Special Rational Number Sequences</p>
<ul>
<li>Interesting statistical result</li>
<li>Conjecture C</li>
<li>Another curious statistical result</li>
</ul>
<p>Exercises</p>
<p><em>Read the full article <a href="https://www.datasciencecentral.com/profiles/blogs/two-new-deep-conjectures-in-probabilistic-number-theory" target="_blank" rel="noopener">here</a>. </em></p>
<p><strong>10 Machine Learning Methods that Every Data Scientist Should Know</strong><br/><em>Vincent Granville, August 30, 2019</em></p>
<p id="a572" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">Machine learning is a hot topic in research and industry, with new methodologies developed all the time. The speed and complexity of the field make keeping up with new techniques difficult even for experts, and potentially overwhelming for beginners.</p>
<p id="0d4d" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">To demystify machine learning and to offer a learning path for those who are new to the core concepts, let’s look at ten different methods, including simple descriptions, visualizations, and examples for each one.</p>
<p class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw"><a href="https://storage.ning.com/topology/rest/1.0/file/get/3487793979?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3487793979?profile=RESIZE_710x" class="align-center"/></a></p>
<p id="64a5" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">A machine learning algorithm, also called a model, is a mathematical expression that represents data in the context of a problem, often a business problem. The aim is to go from data to insight. For example, if an online retailer wants to anticipate sales for the next quarter, they might use a machine learning algorithm that predicts those sales based on past sales and other relevant data. Similarly, a windmill manufacturer might visually monitor important equipment and feed the video data through algorithms trained to identify dangerous cracks.</p>
<p id="00c2" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw">The ten methods described offer an overview — and a foundation you can build on as you hone your machine learning knowledge and skill:</p>
<ol class="">
<li id="b886" class="nj nk eo ao nl b nm nn no np nq nr ns nt nu nv nw nx ny nz">Regression</li>
<li id="2763" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Classification</li>
<li id="54dd" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Clustering</li>
<li id="c007" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Dimensionality Reduction</li>
<li id="1af1" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Ensemble Methods</li>
<li id="91ed" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Neural Nets and Deep Learning</li>
<li id="5128" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Transfer Learning</li>
<li id="2251" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Reinforcement Learning</li>
<li id="6975" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Natural Language Processing</li>
<li id="429f" class="nj nk eo ao nl b nm ob no oc nq od ns oe nu of nw nx ny nz">Word Embeddings</li>
</ol>
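<p>As a taste of the first method on the list, here is a minimal regression sketch for the retailer example above, using synthetic (hypothetical) quarterly sales figures:</p>

```python
import numpy as np

# Hypothetical sales for 8 past quarters: a linear trend plus noise
quarters = np.arange(8)
sales = 100 + 5 * quarters + np.random.default_rng(1).normal(0, 2, 8)

# Ordinary least squares fit: sales ~ slope * quarter + intercept
slope, intercept = np.polyfit(quarters, sales, 1)

# Predict next quarter (quarter index 8)
next_quarter = slope * 8 + intercept
print(round(next_quarter, 1))
```

<p>The fitted line recovers the underlying trend despite the noise; each of the other nine methods gets a similarly concrete treatment in the full article.</p>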
<p><em>Read the full article, with a detailed description of each method, <a href="https://www.datasciencecentral.com/profiles/blogs/10-machine-learning-methods-that-every-data-scientist-should-know" target="_blank" rel="noopener">here</a>. </em></p>