<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Parker Tichko</title>
    <description>Website of Parker Tichko</description>
    <link>http://ptichko.github.io/</link>
    <atom:link href="http://ptichko.github.io/feed.xml" rel="self" type="application/rss+xml" />
    <pubDate>Wed, 03 Sep 2025 22:24:17 +0000</pubDate>
    <lastBuildDate>Wed, 03 Sep 2025 22:24:17 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>Parker Tichko - Music For Ferns (2024)</title>
        <description>&lt;h1 id=&quot;music-for-ferns-2024&quot;&gt;Music For Ferns (2024)&lt;/h1&gt;

&lt;iframe style=&quot;border: 0; width: 400px; height: 373px;&quot; src=&quot;https://bandcamp.com/EmbeddedPlayer/album=2824364248/size=large/bgcol=ffffff/linkcol=2ebd35/artwork=small/transparent=true/&quot; seamless=&quot;&quot;&gt;&lt;a href=&quot;https://parkertichko.bandcamp.com/album/music-for-ferns&quot;&gt;Music For Ferns by Parker Tichko&lt;/a&gt;&lt;/iframe&gt;

&lt;p&gt;Music For Ferns (2024) is out today. The mini-album is a short instrumental cycle of synthesizer and electronic music dedicated to the ontogenetic life-cycle of the fern. The music features soft, warm, organic, and, at times, shimmering tones created with a variety of analog synthesizers from the 1970s/1980s for plants (and people) to listen to. You can stream it on &lt;a href=&quot;https://open.spotify.com/album/4QTmRmxs9WUQ5iCbSsw76R?si=JE0bu4-fSQeRW6Fe7Dvjkg_&quot;&gt;Spotify&lt;/a&gt;, Apple Music, etc., or support the project directly by purchasing the album on &lt;a href=&quot;https://parkertichko.bandcamp.com/album/music-for-ferns&quot;&gt;Bandcamp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, you can watch the official music video on YouTube that features the music of Music For Ferns set to videography of wild cinnamon ferns in a New England forest during late September 2024:&lt;/p&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/SzMoUj6B7Sk?si=XB_PU8rDk22t4jDb&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;

&lt;h2 id=&quot;background&quot;&gt;Background&lt;/h2&gt;

&lt;p&gt;During the pandemic, I discovered a sub-genre of early electronic music written for plants that originated with Mort Garson’s now cult-classic 1976 release of &lt;a href=&quot;https://en.wikipedia.org/wiki/Mother_Earth%27s_Plantasia&quot;&gt;Mother Earth’s Plantasia&lt;/a&gt;. I fell in love with the warm, organic tones of plant music, which was largely composed with analog synthesizers from the 1970s and 1980s. Music For Ferns continues this tradition by creating vintage synthesizer music with instruments sourced from the same decades, including a vintage Minimoog Model D from 1973, a Yamaha CS-40M from 1979, a Roland Juno-60, a Sequential Circuits Six-Trak, a Roland D-50, and an Ensoniq SQ-80.&lt;/p&gt;

&lt;iframe style=&quot;border-radius:12px&quot; src=&quot;https://open.spotify.com/embed/playlist/33e0vwPao0XMwNPTAkfTkw?utm_source=generator&quot; width=&quot;100%&quot; height=&quot;352&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot; allow=&quot;autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture&quot; loading=&quot;lazy&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;In addition to musical sources, Music For Ferns was also inspired by the lush cinnamon ferns of New England. Shown here is photography by my brother, Brant Tichko, who was inspired to capture a few of these plants in their New England ecology:&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/fern1.jpeg&quot; /&gt;
  &lt;figcaption&gt;
  &lt;font size=&quot;2&quot;&gt;Wild cinnamon ferns in a New England forest. Photography by Brant Tichko.&lt;/font&gt;
&lt;/figcaption&gt;
&lt;/p&gt;

</description>
        <pubDate>Wed, 02 Oct 2024 00:00:00 +0000</pubDate>
        <link>http://ptichko.github.io/2024/10/02/Music-For-Ferns-(2024).html</link>
        <guid isPermaLink="true">http://ptichko.github.io/2024/10/02/Music-For-Ferns-(2024).html</guid>
        
        
      </item>
    
      <item>
        <title>Simulating Adaptive-Frequency Oscillators in MATLAB</title>
        <description>&lt;h1 id=&quot;dynamical-systems-models-of-adaptive-frequency-oscillators&quot;&gt;Dynamical Systems Models of Adaptive-Frequency Oscillators&lt;/h1&gt;

&lt;p&gt;Non-linear oscillators have become widely adopted in cognitive science as models of &lt;em&gt;synchronization&lt;/em&gt; and &lt;em&gt;entrainment&lt;/em&gt;—a dynamic process in which a system’s activity aligns in time with an external, time-varying input signal. Indeed, many systems of interest to cognitive scientists, across multiple scales of organization, exhibit a remarkable ability to synchronize their behavior to time-varying signals: the synchronized activity of neural ensembles during sensory stimulation, synchronized human action to auditory rhythms (e.g., music), and the macroscopic synchronized activity of large groups, such as fireflies flashing in unison or drummers in a drum circle. Models of oscillation are particularly suited to explaining these kinds of synchronized phenomena, as they possess a rich repertoire of synchronization dynamics, such as phase-, mode-, and frequency-locking, that emerge naturally from the dynamical laws that govern their motion and their coupling to input signals. Such oscillatory dynamics may reflect the physical principles that underlie how neural systems, agents, and social groups coordinate their activity over time.&lt;/p&gt;

&lt;p&gt;There are cases, however, in which an oscillator might fail to synchronize to an input signal. One such case is when the frequency of an input signal falls outside the entrainment basin of an oscillator, preventing the oscillator from entering a synchronized state with the input signal. &lt;a href=&quot;https://doi.org/10.1016/j.physd.2006.02.009&quot;&gt;Righetti, Buchli, &amp;amp; Ijspeert (2006)&lt;/a&gt; proposed a Hebbian learning rule for several classes of oscillator models that allows an oscillator to learn the frequency of an external input signal, even for input frequencies that would normally fall outside the entrainment basin of a fixed-frequency oscillator. When equipped with the learning rule, an oscillator adjusts its natural frequency toward a target frequency component of the external input signal. In the case of sinusoidal forcing, the adaptive-frequency oscillator tunes its natural frequency to the fundamental frequency (F0) of the input. If the input signal is more complex—i.e., it contains multiple constituent frequencies—then the learning rule enables the oscillator to synchronize to one of the input signal’s frequency components, depending on the initial natural frequency of the oscillator. Furthermore, if the oscillator itself contains multiple frequency components (e.g., a relaxation oscillator), then the learning rule will tune one of the oscillator’s frequency components to a frequency in the input signal.&lt;/p&gt;

&lt;p&gt;Recently, I began implementing the Hebbian learning rule of Righetti, Buchli, &amp;amp; Ijspeert (2006) in MATLAB for several classes of oscillators analyzed in their manuscript: the Hopf oscillator, the Van der Pol oscillator, the Rayleigh oscillator, the FitzHugh-Nagumo oscillator, and the Rössler strange attractor. You can track my progress by downloading my code from my &lt;a href=&quot;https://github.com/ptichko/Adaptive-Frequency-Oscillators&quot;&gt;GitHub page&lt;/a&gt;. In this post, I share some initial simulations of the Hebbian learning rule with the first of the oscillator models presented in the manuscript—the Hopf oscillator.&lt;/p&gt;

&lt;h2 id=&quot;simulations-of-an-adaptive-frequency-hopf-oscillator&quot;&gt;Simulations Of An Adaptive-Frequency Hopf Oscillator&lt;/h2&gt;

&lt;p&gt;The Hopf oscillator is a non-linear oscillator that oscillates spontaneously, i.e., it settles onto a limit cycle with a non-zero amplitude even in the absence of input. The equations of motion for the Hopf oscillator are given by the following system of ODEs, here represented in Cartesian coordinates:&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
&lt;img src=&quot;https://latex.codecogs.com/svg.image?\begin{array}{l}\dot{x}=\left(\mu-r^{2}\right)&amp;space;x-\omega&amp;space;y&amp;plus;\epsilon&amp;space;F&amp;space;\\\dot{y}=\left(\mu-r^{2}\right)&amp;space;y&amp;plus;\omega&amp;space;x\end{array}&quot; /&gt;
&lt;/p&gt;

&lt;p&gt;where r = sqrt(x^2 + y^2), mu &amp;gt; 0, F is the input signal, omega is the oscillator’s natural frequency, and epsilon is a coupling coefficient to the input signal (and the learning rate; see below). Righetti et al. (2006) introduce a Hebbian learning rule for the Hopf oscillator that takes the following form:&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
&lt;img src=&quot;https://latex.codecogs.com/svg.image?\dot{\omega}=-\epsilon&amp;space;F&amp;space;\frac{y}{\sqrt{x^{2}&amp;plus;y^{2}}}&quot; /&gt;
&lt;/p&gt;

&lt;p&gt;The learning rule governs the dynamics of omega, the control parameter for natural frequency in the Hopf oscillator. F, again, is the input signal, and epsilon controls the learning rate of the system. Below, I run several numerical simulations of a Hopf oscillator with adaptive-frequency dynamics to qualitatively assess the learning dynamics of the Hebbian learning rule. We start by simulating the frequency adaptation of the Hopf oscillator for several initial conditions of the oscillator’s natural frequency (omega_0) to observe whether the oscillator correctly “learns” the frequency of the external input signal. In this simulation, a Hopf oscillator is driven by periodic forcing at 30 Hz. Examining the dynamics of oscillator frequency for several initial conditions (omega_0 = 18, 26, 36, 40 Hz), we find that, for all initial conditions, the frequency of the oscillator converges to the frequency of the external input signal—30 Hz. (Moreover, for all initial conditions, there is a momentary increase in the variability of oscillator frequency right before the oscillator synchronizes to the external signal at its learned frequency.)&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/Hopf_MultipleW0s.png&quot; /&gt;
  &lt;figcaption&gt;
                &lt;font size=&quot;2&quot;&gt;Simulation of an adaptive-frequency Hopf oscillator with multiple initial conditions for oscillator natural frequency (omega).
				Here, x = 0, y = 1, e = 1, and m = 1, with cos(30t) as the input signal. &lt;/font&gt;
&lt;/figcaption&gt;
&lt;/p&gt;
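&lt;p&gt;For readers without MATLAB, the qualitative behavior above can be sketched with a simple forward-Euler integration in plain Python. (This is an illustrative sketch, not the MATLAB code from the repository; the step size, duration, and mu value are my own choices.)&lt;/p&gt;

```python
import math

# Forward-Euler integration of the adaptive-frequency Hopf oscillator:
#   dx/dt     = (mu - r^2) x - omega y + eps F(t)
#   dy/dt     = (mu - r^2) y + omega x
#   domega/dt = -eps F(t) y / r
mu, eps = 1.0, 1.0            # limit-cycle parameter; coupling/learning rate
forcing = 30.0                # input signal F(t) = cos(30 t)
x, y, omega = 0.0, 1.0, 40.0  # natural frequency starts well above the target
dt, t = 0.001, 0.0
for _ in range(1_000_000):    # integrate out to t = 1000
    F = math.cos(forcing * t)
    r2 = x * x + y * y
    r = math.sqrt(r2)
    dx = (mu - r2) * x - omega * y + eps * F
    dy = (mu - r2) * y + omega * x
    domega = -eps * F * y / r
    x, y, omega = x + dx * dt, y + dy * dt, omega + domega * dt
    t += dt
print(omega)  # omega should have drifted from 40 toward the forcing frequency, 30
```

&lt;p&gt;Logging omega at intermediate times should trace out adaptation curves like the ones plotted above.&lt;/p&gt;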

&lt;p&gt;In the time domain, we can also clearly identify the moment when an oscillator learns the frequency of the input signal and enters a phase-locked relationship with it. Let’s run a similar simulation with a slower input signal of 3 Hz, where we can more readily observe the dynamics of frequency adaptation in the time domain. With an initial condition of omega_0 = 10 Hz, we observe that the 10-Hz Hopf oscillator successfully “learns” the frequency of the 3-Hz input signal, as evinced by the dynamics of the oscillator’s natural frequency (i.e., omega). This learning is also evident in the time domain: as the Hopf oscillator nears the moment of synchronization (time 120–140), the phase of the oscillator fluctuates wildly before settling in lock-step with the driving signal. (The gif below shows the simulation from time 120–140, right before and after the oscillator learns the frequency of the input signal.)&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
&lt;img src=&quot;/img/Hopf_PhaseP2.gif&quot; /&gt;
&lt;figcaption&gt;  &lt;font size=&quot;2&quot;&gt;Simulation of an adaptive-frequency Hopf oscillator that learns the frequency of a 3-Hz input signal. Top: Trajectory through phase space of the Hopf oscillator. Middle: Changes in oscillator natural frequency (i.e., omega) over time. The horizontal dashed line denotes the target frequency of 3 Hz.
Bottom: Time series of the Hopf oscillator (y component, purple line) and the 3-Hz input signal (dashed line). Here, x = 0, y = 1, e = 1, and m = 1. &lt;/font&gt;
&lt;/figcaption&gt;
&lt;/p&gt;

&lt;p&gt;Next, we can investigate the effect of the learning rate, the epsilon parameter, on the dynamics of frequency adaptation. I aimed to replicate Figure 2 from the manuscript, which reports the effect of increasing the learning rate on the dynamics of frequency adaptation. Similar to our first simulation, the oscillator is driven by periodic forcing at 30 Hz, but now we vary the epsilon parameter, which controls the learning rate of the system. The initial condition of oscillator natural frequency is set to 40 Hz for several learning rates. Unsurprisingly, a slower learning rate requires more time for the oscillator to learn the frequency of the external signal (epsilon = 1 converges in under 500 time units, while epsilon = 0.4 takes over 2000 time units). In short, the learning rate controls the overall timescale of frequency adaptation.&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/Righetti_Fig2.png&quot; /&gt;
   &lt;figcaption&gt;
                &lt;font size=&quot;2&quot;&gt; Replicating Figure 2 from Righetti et al. (2006): the effect of different learning rates (epsilon) on frequency adaptation. &lt;/font&gt;
&lt;/figcaption&gt;
&lt;/p&gt;

&lt;p&gt;Finally, I explored whether an adaptive-frequency Hopf oscillator can learn the frequency content of a complex input signal. First, we create a signal containing multiple frequencies; in this example, a complex waveform with a fundamental frequency of 3 Hz (F0) and two harmonics at 6 Hz (F1) and 9 Hz (F2). We simulate the model for the initial conditions omega_0 = 1, 4, 5, and 10 Hz. Plotting omega over time for each initial condition, we see that the 1-Hz oscillator learns the 3-Hz component of the input signal. This is also the case for the 4-Hz oscillator. (Interestingly, though, the 4-Hz oscillator first increases in frequency at the beginning of the simulation, tuning its natural frequency to ~5 Hz, before slowing down and heading toward the 3-Hz component of the input signal.) The 5-Hz oscillator learns the 6-Hz component, and the 10-Hz oscillator learns the 9-Hz component.&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/Hopf_MultiFreq.png&quot; /&gt;
   &lt;figcaption&gt;
                &lt;font size=&quot;2&quot;&gt; Adaptive-frequency dynamics for a complex input signal. Depending on the Hopf oscillator&apos;s initial natural frequency (omega), the oscillator will &quot;learn&quot; a different frequency component of the input signal (horizontal dashed lines).
				Here, x = 0, y = 1, e = 1, and m = 1. &lt;/font&gt;
&lt;/figcaption&gt;
&lt;/p&gt;

&lt;p&gt;In the time domain, we can clearly see how the oscillators align with the events in the complex waveform. For instance, after learning its new frequency, the 1-Hz oscillator phase-locks to the second high-amplitude event; the 4-Hz oscillator also phase-locks to the second high-amplitude event; the 5-Hz oscillator phase-locks to both high-amplitude events; and the 10-Hz oscillator phase-locks to all of the events (both the low- and high-amplitude peaks).&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/Hopf_MultiFreqTimeDomain.png&quot; /&gt;
     &lt;figcaption&gt;
                &lt;font size=&quot;2&quot;&gt; Time series of adaptive-frequency dynamics of Hopf oscillator (y component) to a complex input signal. Different Hopf oscillators phase-lock to different events in the complex waveform.
				Here, x = 0, y = 1, e = 1, and m = 1. &lt;/font&gt;
&lt;/figcaption&gt;
&lt;/p&gt;

</description>
        <pubDate>Mon, 14 Mar 2022 00:00:00 +0000</pubDate>
        <link>http://ptichko.github.io/2022/03/14/Adaptive-Frequency-Oscillators.html</link>
        <guid isPermaLink="true">http://ptichko.github.io/2022/03/14/Adaptive-Frequency-Oscillators.html</guid>
        
        
      </item>
    
      <item>
        <title>Fitting An Exponential Regression To Google Citation Data</title>
        <description>&lt;h1 id=&quot;fitting-an-exponential-regression-to-google-citation-data&quot;&gt;Fitting An Exponential Regression To Google Citation Data&lt;/h1&gt;

&lt;p&gt;In this post, I show how the &lt;a href=&quot;https://cran.r-project.org/web/packages/scholar/index.html&quot;&gt;scholar&lt;/a&gt; library for R can be used to explore historical citation data archived on Google Scholar. Using the scholar library, we import citation-related data, beginning in the year 1982, for two of the most important physicists of the 20th century – Stephen Hawking and Richard Feynman – and examine how their total citations evolved over time. For added fun, we fit a non-linear, exponential regression model to model their respective trends of citations over time.&lt;/p&gt;

&lt;p&gt;First, we load the scholar library, locate the identification numbers for Hawking and Feynman (the identification numbers for authors on Google Scholar can be located in the URL for the author’s Google Scholar page), and use the compare_scholar_careers() function to import citation data for Hawking and Feynman beginning in the year 1982 and up to the most recent year.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;library(scholar)
library(tidyverse)


#  Richard Feynman&apos;s and Stephen Hawking&apos;s IDs
ids &amp;lt;- c(&quot;B7vSqZsAAAAJ&quot;, &quot;qj74uXkAAAAJ&quot;)

# Import Google Scholar data
df &amp;lt;- compare_scholar_careers(ids)


head(df)
            id year cites career_year            name
1 B7vSqZsAAAAJ 1982   581           0 Richard Feynman
2 B7vSqZsAAAAJ 1983   605           1 Richard Feynman
3 B7vSqZsAAAAJ 1984   657           2 Richard Feynman
4 B7vSqZsAAAAJ 1985   644           3 Richard Feynman
5 B7vSqZsAAAAJ 1986   726           4 Richard Feynman
6 B7vSqZsAAAAJ 1987   718           5 Richard Feynman

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Plotting the total number of citations since 1982 (here we use the career_year variable generated by compare_scholar_careers(), which standardizes the year 1982 as the start of the career), we observe a positive trend in citations that appears to rise in an exponential manner. Moreover, at least from 1982, Hawking appears to eclipse Feynman in terms of absolute number of total citations.&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/feynmanhawking.png&quot; /&gt;
&lt;/p&gt;

&lt;h2 id=&quot;modelling-stephen-hawkings-and-richard-feynmans-citation-history&quot;&gt;Modelling Stephen Hawking’s and Richard Feynman’s Citation History&lt;/h2&gt;

&lt;p&gt;From plotting the data, the total citations from Hawking and Feynman both appear to follow an exponential trend over the course of their citation history. We can try fitting a non-linear regression model for each author, specifically estimating parameters for an exponential model of the form:&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
&lt;img src=&quot;https://render.githubusercontent.com/render/math?math=y^{\prime}=\alpha e^{\beta x}&quot; /&gt;
&lt;/p&gt;

&lt;p&gt;where y-prime is the predicted number of citations since the career start, x is the number of years since career start (x = 0 would reflect the year 1982 in this example), and alpha and beta are parameters to be estimated.&lt;/p&gt;

&lt;p&gt;We use the nls() function in R to fit non-linear models via a non-linear least-squares method, specifying the above formula for an exponential model and supplying initial parameters for alpha and beta to the optimization procedure. To derive initial parameters, we first fit a simple linear regression model to the log-transformed citation data. (If the trend is truly exponential, log-transforming the dependent variable should yield a linear relationship between log(y) and x.) We then save the coefficients of this model and use them as starting parameters in the initial instantiation of our nls() model. Finally, we plot the predicted values of the exponential model against the citation data from Google Scholar to visually examine the fit of the model against the empirical data.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Fit exponential model to Feynman

df.feynman &amp;lt;- df %&amp;gt;%
  mutate(cites.log = log(cites)) %&amp;gt;% # log transform cites for simple linear regression
  filter(name == &quot;Richard Feynman&quot;) %&amp;gt;%
  filter(career_year != max(career_year)) # remove latest year


m.lm &amp;lt;- lm(cites.log ~ career_year, data = df.feynman) # linear regression to get starting coefficients
st &amp;lt;- list(a = exp(coef(m.lm )[1]), b = coef(m.lm )[2]) # intercept (remember to take exp()) and slope coefficients
m.exp &amp;lt;- nls(cites ~ I(a*exp(b*career_year)), data=df.feynman, start=st, trace=T) # non-linear regression with least squares

m.exp.fitted &amp;lt;-fitted(m.exp) # save fitted values

# Summary
summary(m.exp)

Formula: cites ~ I(a * exp(b * career_year))

Parameters:
   Estimate Std. Error t value Pr(&amp;gt;|t|)    
a 7.194e+02  4.138e+01   17.39   &amp;lt;2e-16 ***
b 5.217e-02  1.876e-03   27.81   &amp;lt;2e-16 ***
---
Signif. codes:  0 &apos;***&apos; 0.001 &apos;**&apos; 0.01 &apos;*&apos; 0.05 &apos;.&apos; 0.1 &apos; &apos; 1

Residual standard error: 248.5 on 37 degrees of freedom

Number of iterations to convergence: 5 
Achieved convergence tolerance: 7.981e-06


# coefficients
coef(m.exp)
           a            b 
719.37917740   0.05217138

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The parameters of the estimated exponential model were found to be significant, suggesting that an exponential model of the following form captures the trend observed in Feynman’s citation data beginning in 1982:&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
&lt;img src=&quot;https://render.githubusercontent.com/render/math?math=y^{\prime}= 719e^{0.052x}&quot; /&gt;
&lt;/p&gt;

&lt;p&gt;Visualizing the data against the model prediction, we find that the predicted values from the model, denoted by the pink dashed line, capture most of the exponential trend, apart from the tails.&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/feynmanmodel.png&quot; /&gt;
&lt;/p&gt;

&lt;p&gt;Repeating the same procedure to model the Hawking data, we get:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Fit exponential model to Hawking


df.hawking &amp;lt;- df %&amp;gt;%
  mutate(cites.log = log(cites)) %&amp;gt;% # log transform cites for simple linear regression
  filter(name == &quot;Stephen Hawking&quot;) %&amp;gt;%
  filter(career_year != max(career_year)) # remove latest year


m.lm &amp;lt;- lm(cites.log ~ career_year, data = df.hawking) # linear regression to get starting coefficients
st &amp;lt;- list(a = exp(coef(m.lm)[1]), b = coef(m.lm)[2]) # intercept (remember to take exp()) and slope coefficients
m.exp &amp;lt;- nls(cites ~ I(a*exp(b*career_year)), data=df.hawking, start=st, trace=T) # non-linear regression with least squares

m.exp.fitted &amp;lt;-fitted(m.exp) # save fitted values

# Summary
summary(m.exp)


Formula: cites ~ I(a * exp(b * career_year))

Parameters:
   Estimate Std. Error t value Pr(&amp;gt;|t|)    
a 8.240e+02  2.725e+01   30.24   &amp;lt;2e-16 ***
b 5.379e-02  1.073e-03   50.15   &amp;lt;2e-16 ***
---
Signif. codes:  0 &apos;***&apos; 0.001 &apos;**&apos; 0.01 &apos;*&apos; 0.05 &apos;.&apos; 0.1 &apos; &apos; 1

Residual standard error: 167.9 on 37 degrees of freedom

Number of iterations to convergence: 4 
Achieved convergence tolerance: 7.586e-07


# Coefficients
coef(m.exp)
           a            b 
823.96882888   0.05378992 

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Similar to the Feynman model, the estimated parameters for the Hawking model were also significant, yielding an exponential model with the following parameters:&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
&lt;img src=&quot;https://render.githubusercontent.com/render/math?math=y^{\prime}= 823e^{0.053x}&quot; /&gt;
&lt;/p&gt;

&lt;p&gt;Visualizing the data against the model prediction, we find that the Hawking model, again, denoted by the pink dashed line, captures the exponential trend quite well.&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;
  &lt;img src=&quot;/img/hawkingmodel.png&quot; /&gt;
&lt;/p&gt;
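&lt;p&gt;The heart of this procedure is the log-linear trick for deriving starting values. As a language-agnostic illustration, here is a minimal, dependency-free Python sketch of that step on synthetic data (the a = 700 and b = 0.05 values are made up for illustration; they are not the Scholar estimates above):&lt;/p&gt;

```python
import math
import random

# Synthetic citation counts that follow y = a * exp(b * x) with mild noise
a_true, b_true = 700.0, 0.05
random.seed(1)
xs = list(range(40))
ys = [a_true * math.exp(b_true * x) * random.uniform(0.97, 1.03) for x in xs]

# Ordinary least squares on (x, log y): if the trend is exponential,
# log(y) is linear in x, so the slope estimates b and exp(intercept) estimates a
n = len(xs)
mean_x = sum(xs) / n
logs = [math.log(y) for y in ys]
mean_ly = sum(logs) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
b_hat = sum((x - mean_x) * (ly - mean_ly) for x, ly in zip(xs, logs)) / sxx
a_hat = math.exp(mean_ly - b_hat * mean_x)
print(a_hat, b_hat)  # both estimates land close to the true a = 700, b = 0.05
```

&lt;p&gt;In the R workflow above, these estimates would then seed nls() for the full non-linear least-squares fit.&lt;/p&gt;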

</description>
        <pubDate>Wed, 19 May 2021 00:00:00 +0000</pubDate>
        <link>http://ptichko.github.io/2021/05/19/Fitting-An-Exponential-Regression-To-Google-Citation-Data.html</link>
        <guid isPermaLink="true">http://ptichko.github.io/2021/05/19/Fitting-An-Exponential-Regression-To-Google-Citation-Data.html</guid>
        
        
      </item>
    
      <item>
        <title>Pipeline To Calculate D-Prime in R</title>
        <description>&lt;h1 id=&quot;pipeline-to-calculate-d-prime-in-r&quot;&gt;Pipeline To Calculate D-Prime in R&lt;/h1&gt;

&lt;p&gt;One efficient way to calculate d-prime in R is to use the &lt;a href=&quot;https://www.rdocumentation.org/packages/psycho/versions/0.6.1/topics/dprime&quot;&gt;dprime() function&lt;/a&gt; from the psycho library. Prior to using the dprime() function, however, the user must calculate the total numbers of hits, misses, false alarms, and correct rejections for each participant in their experiment. This can be quite cumbersome. For a recent project, I automated this process by writing two R functions that label whether a trial in an experiment is a hit, miss, false alarm, or correct rejection, and tally up the total number of hits, misses, false alarms, and correct rejections by participant. (Originally, I had intended to write my own custom R functions to directly calculate d-prime using tidyverse functions, but with feedback from Karim Rivera (thanks, Karim!), who found an error in my original approach, I decided to modify my functions to just compute d-prime labels and subscores and leave the direct calculation of d-prime statistics to the psycho library, which is a much more flexible method than what I had written.)
Once the total number of hits, misses, false alarms, and correct rejections is calculated for each participant, the dprime() function from the psycho library can be used to calculate several d-prime statistics related to sensitivity and bias.&lt;/p&gt;

&lt;p&gt;You can download the R functions here &lt;a href=&quot;/r/dprime_lab.R&quot; target=&quot;_blank&quot;&gt;&lt;i class=&quot;fa fa-file-text fa-md&quot;&gt;&lt;/i&gt;&lt;/a&gt; and here &lt;a href=&quot;/r/dprime_cat.R&quot; target=&quot;_blank&quot;&gt;&lt;i class=&quot;fa fa-file-text fa-md&quot;&gt;&lt;/i&gt;&lt;/a&gt;, then load them into your R session using the source() function:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The dprime_cat() function requires the dplyr and tibble libraries.&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;source(&quot;dprime_lab.R&quot;) # label individual trials (i.e., rows in a data frame) as a hit, miss, false alarm, or correct rejection
source(&quot;dprime_cat.R&quot;) # total number of hits, misses, false alarms, and correct rejections for each participant
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As an example, let’s create a data frame reflecting a hypothetical cognitive task. Imagine we ran an experiment where participants were presented with two images, called “stim1” and “stim2,” and they had to determine whether the two images were the same or different after a delay period (i.e., a delayed match-to-sample task). The dependent variable, stored in the column “correct,” is whether the participants correctly determined whether the two images were the same or different on a given trial: 0 denotes an incorrect response, while 1 denotes a correct response.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Create a data frame
df &amp;lt;- data.frame(participant = c(rep(&quot;Participant 1&quot;,24), rep(&quot;Participant 2&quot;,24), rep(&quot;Participant 3&quot;,24), rep(&quot;Participant 4&quot;,24)),
           stim1 = rep(c(rep(c(&quot;Apple&quot;, &quot;Orange&quot;, &quot;Orange&quot;, &quot;Apple&quot;), 6),
                     rep(c(&quot;Orange&quot;, &quot;Apple&quot;, &quot;Apple&quot;, &quot;Orange&quot;), 6)),2),
           stim2 = rep(c(rep(c(&quot;Apple&quot;, &quot;Orange&quot;, &quot;Apple&quot;, &quot;Orange&quot;), 6),
                     rep(c(&quot;Orange&quot;, &quot;Apple&quot;, &quot;Apple&quot;, &quot;Orange&quot;), 6)),2),
           correct = c(rep(c(1,1,0,0),1), rep(c(1,0,1,0),1),
                       rep(c(0,0,1,1),2), rep(c(0,1,0,1),2),
                       rep(c(0,1,0,1),2), rep(c(0,0,1,1),2),
                       rep(c(1,0,1,0),1), c(rep(c(1,1,0,0),1))))
					   
					   
# Reorder by participant
df &amp;lt;- df[order(df[,&quot;participant&quot;]),] 

# View first rows of data frame 
head(df)
    participant  stim1  stim2 correct
1 Participant 1  Apple  Apple       1
2 Participant 1 Orange Orange       1
3 Participant 1 Orange  Apple       0
4 Participant 1  Apple Orange       0
5 Participant 1  Apple  Apple       1
6 Participant 1 Orange Orange       0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;First, we can use dprime_lab() to label each trial as a hit, miss, false alarm, or correct rejection. The function returns a data frame with new columns that summarizes whether a particular trial was a hit, miss, false alarm, or correct rejection. Start by calling dprime_lab() and passing through the name of the column with the first image, “stim1”, the name of the column with the second image, “stim2”, then the name of the column with the dependent variable, “correct”, and finally the name of the data frame.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Calculate correct rejections, hits, misses, and false alarms
df.dprime &amp;lt;- dprime_lab(&quot;stim1&quot;, &quot;stim2&quot;, &quot;correct&quot;, data = df)

# View first rows of data frame, now with d-prime labels
head(df.dprime)
    participant  stim1  stim2 correct CorrectRej Miss FalseAlarm Hit
1 Participant 1  Apple  Apple       1          1   NA         NA  NA
2 Participant 1 Orange Orange       1          1   NA         NA  NA
3 Participant 1 Orange  Apple       0         NA    1         NA  NA
4 Participant 1  Apple Orange       0         NA    1         NA  NA
5 Participant 1  Apple  Apple       1          1   NA         NA  NA
6 Participant 1 Orange Orange       0         NA   NA          1  NA
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, run dprime_cat() to calculate the total number of hits, misses, false alarms, and correct rejections for each participant. Call dprime_cat(), pass through the data frame with d-prime labels, and then pass through the “participant” column (without quotes).
&lt;strong&gt;Note:&lt;/strong&gt; Because this function uses dplyr, you must pass through the names of your variables &lt;strong&gt;without&lt;/strong&gt; quotation marks:&lt;/p&gt;

&lt;p&gt;The output is a tibble that summarizes the number of correct rejections, hits, misses, and false alarms for each participant:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Calculate total number of hits, misses, false alarms, and correct rejections for each participant
df.cat &amp;lt;- dprime_cat(df.dprime, participant)

head(df.cat)
# A tibble: 4 x 8
  participant    Hits Misses FalseAlarms CorrectRejs TotalTarg TotalDis NumRes
  &amp;lt;chr&amp;gt;         &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt;       &amp;lt;dbl&amp;gt;       &amp;lt;dbl&amp;gt;     &amp;lt;dbl&amp;gt;    &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt;
1 Participant 1     7      5           7           5        12       12     24
2 Participant 2     0      0          12          12         0       24     24
3 Participant 3     7      5           7           5        12       12     24
4 Participant 4     0      0          12          12         0       24     24
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
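&lt;p&gt;These per-participant tallies can be sanity-checked in base R by summing each label column while ignoring NAs. A small sketch with toy data shaped like the dprime_lab() output above (the values here are hypothetical):&lt;/p&gt;

```r
# Toy data frame mimicking the dprime_lab() label columns (hypothetical values)
lab_df = data.frame(
  participant = rep("Participant 1", 4),
  CorrectRej  = c(1, NA, NA, NA),
  Miss        = c(NA, 1, NA, NA),
  FalseAlarm  = c(NA, NA, 1, NA),
  Hit         = c(NA, NA, NA, 1)
)

# Sum each label column per participant; the NAs drop out of the sums
sapply(lab_df[, -1], function(col) tapply(col, lab_df$participant, sum, na.rm = TRUE))
```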

&lt;p&gt;Finally, you can pass individual columns from the tibble returned by dprime_cat() to the dprime() function from the psycho library to compute d-prime statistics:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Finally, use psycho::dprime() to calculate d-prime stats
dprime.stats &amp;lt;- psycho::dprime(df.cat$Hits, df.cat$FalseAlarms, df.cat$Misses, df.cat$CorrectRejs)

# Add d-prime values into df
df.cat$dprime &amp;lt;- dprime.stats$dprime

head(df.cat) # note, in this example all d-prime values are 0
# A tibble: 4 x 9
  participant    Hits Misses FalseAlarms CorrectRejs TotalTarg TotalDis NumRes dprime
  &amp;lt;chr&amp;gt;         &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt;       &amp;lt;dbl&amp;gt;       &amp;lt;dbl&amp;gt;     &amp;lt;dbl&amp;gt;    &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt;
1 Participant 1     7      5           7           5        12       12     24      0
2 Participant 2     0      0          12          12         0       24     24      0
3 Participant 3     7      5           7           5        12       12     24      0
4 Participant 4     0      0          12          12         0       24     24      0

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
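&lt;p&gt;The zero d-prime values make sense given the standard uncorrected formula, d-prime = z(hit rate) - z(false-alarm rate): every participant’s hit rate equals their false-alarm rate here. A quick base-R check for Participant 1 (note that psycho::dprime() also applies corrections for extreme rates of 0 or 1, as with Participants 2 and 4, who have no target trials, so this simple formula only matches the non-extreme cases):&lt;/p&gt;

```r
# Uncorrected d-prime for Participant 1: the two rates are equal, so d-prime is 0
hit_rate = 7 / 12   # Hits / TotalTarg
fa_rate  = 7 / 12   # FalseAlarms / TotalDis
d_prime  = qnorm(hit_rate) - qnorm(fa_rate)
d_prime  # 0
```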

</description>
        <pubDate>Sat, 27 Feb 2021 00:00:00 +0000</pubDate>
        <link>http://ptichko.github.io/2021/02/27/Pipeline-To-Calculate-D-Prime-in-R.html</link>
        <guid isPermaLink="true">http://ptichko.github.io/2021/02/27/Pipeline-To-Calculate-D-Prime-in-R.html</guid>
        
        
      </item>
    
      <item>
        <title>R Function To Calculate Summary Statistics By Factor Levels</title>
        <description>&lt;h1 id=&quot;r-function-to-calculate-summary-statistics-for-each-combination-of-factor-levels&quot;&gt;R Function To Calculate Summary Statistics For Each Combination of Factor Levels&lt;/h1&gt;

&lt;p&gt;Recently, I created a function called group_by_summary_stats() that quickly calculates basic summary stats (e.g., N, mean, median, SD, SE, and range) for a single dependent variable for each combination of factor levels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The function uses several functions from the dplyr library, specifically the group_by() and summarise() functions, so you’ll need to ensure you’ve installed dplyr. You’ll also need to have the stringr and tibble libraries installed.&lt;/p&gt;

&lt;p&gt;You can download the R function here &lt;a href=&quot;/r/group_by_summary_stats.R&quot; target=&quot;_blank&quot;&gt;&lt;i class=&quot;fa fa-file-text fa-md&quot;&gt;&lt;/i&gt;&lt;/a&gt; and load it into your R session using the source() function:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;source(&quot;group_by_summary_stats.R&quot;)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As an example, let’s use the function to calculate some summary statistics on the CO2 dataset included with R. Imagine we ran an experiment with two factors, called Type and Plant, and one dependent variable, called uptake (i.e., the amount of CO2 uptake), and we want to quickly summarize CO2 uptake for each combination of Plant and Type.
First, let’s load the CO2 dataset in R:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;data(&quot;CO2&quot;)  
df &amp;lt;- CO2  
head(df)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
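&lt;p&gt;To see what the function will compute, here is one row of the summary reproduced by hand in base R for plant Qn1 (a sketch of the arithmetic only, not the function’s actual dplyr implementation):&lt;/p&gt;

```r
# Reproduce the Qn1 row's statistics by hand (base R sketch)
data("CO2")
x = CO2$uptake[CO2$Plant == "Qn1"]

N      = length(x)                        # 7
Mean   = mean(x)                          # about 33.2
Median = median(x)                        # 35.3
SE     = sd(x) / sqrt(N)                  # standard error of the mean
Range  = paste(min(x), max(x), sep = "-") # "16-39.7"
```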

&lt;p&gt;We can easily summarize the uptake of CO2 for each combination of the Plant and Type factors by using group_by_summary_stats(). To do so, call group_by_summary_stats(), first passing through the data frame (df), then the name of the dependent variable (uptake), and finally the names of any factors (Type, Plant).
&lt;strong&gt;Note:&lt;/strong&gt; Because of the way dplyr works, you must pass through the names of your variables &lt;strong&gt;without&lt;/strong&gt; quotation marks:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;group_by_summary_stats(df, uptake, Type, Plant)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The output of group_by_summary_stats() is a table that summarizes CO2 uptake for each combination of the levels within the Type and Plant factors:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Groups:   Type [2]
   Type        Plant     N  Mean Median    SD    SE Range    
   &amp;lt;fct&amp;gt;       &amp;lt;ord&amp;gt; &amp;lt;int&amp;gt; &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;chr&amp;gt;    
 1 Quebec      Qn1       7  33.2   35.3  8.21 3.10  16-39.7  
 2 Quebec      Qn2       7  35.2   40.6 11    4.16  13.6-44.3
 3 Quebec      Qn3       7  37.6   42.1 10.4  3.91  16.2-45.5
 4 Quebec      Qc1       7  30.0   32.5  8.33 3.15  14.2-38.7
 5 Quebec      Qc3       7  32.6   38.1 10.3  3.90  15.1-41.4
 6 Quebec      Qc2       7  32.7   37.5 11.3  4.28  9.3-42.4 
 7 Mississippi Mn3       7  24.1   27.8  6.48 2.45  11.3-28.5
 8 Mississippi Mn2       7  27.3   31.1  7.65 2.89  12-32.4  
 9 Mississippi Mn1       7  26.4   30    8.69 3.29  10.6-35.5
10 Mississippi Mc2       7  12.1   12.5  2.19 0.827 7.7-14.4 
11 Mississippi Mc3       7  17.3   17.9  3.05 1.15  10.6-19.9
12 Mississippi Mc1       7  18     18.9  4.12 1.56  10.5-22.2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
</description>
        <pubDate>Wed, 10 Jun 2020 00:00:00 +0000</pubDate>
        <link>http://ptichko.github.io/2020/06/10/R-Function-To-Calculate-Summary-Statistics-By-Factor-Levels.html</link>
        <guid isPermaLink="true">http://ptichko.github.io/2020/06/10/R-Function-To-Calculate-Summary-Statistics-By-Factor-Levels.html</guid>
        
        
      </item>
    
      <item>
        <title>R Function to Reverse Code a Likert Scale</title>
        <description>&lt;h1 id=&quot;an-r-function-to-reverse-code-a-likert-scale&quot;&gt;An R Function to Reverse Code a Likert Scale&lt;/h1&gt;

&lt;p&gt;A recent research project required that I reverse code several items on a questionnaire before scoring it. Reverse coding is necessary when certain items on a questionnaire are negatively worded, such that a low score really corresponds to a high score. Ideally, before conducting any sort of analysis, we should ensure that the direction of low and high scores is uniform across all questionnaire items.&lt;/p&gt;

&lt;p&gt;You can download the R function here &lt;a href=&quot;/r/reverseCode.R&quot; target=&quot;_blank&quot;&gt;&lt;i class=&quot;fa fa-file-text fa-md&quot;&gt;&lt;/i&gt;&lt;/a&gt; and load it into your R session using the source() function:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;source(&quot;reverseCode.R&quot;)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This R function allows a user to reverse code a scalar or vector of scores using a user-specified Likert-scale range. By default, the function uses a 1-to-5 Likert scale (this can be changed; see the examples below). First, let’s use the function to reverse code a response of “2” on a 1-to-5 Likert scale (i.e., reverse code the “2” to a “4”) using the function’s default settings:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# reverse code &quot;2&quot; to a &quot;4&quot; on a Likert scale of 1-5
reverseCode(2)

# output
[1] 4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
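&lt;p&gt;The arithmetic behind this is just a reflection about the midpoint of the scale: reversed = min + max - score. A minimal hypothetical version (not necessarily the downloadable function’s exact implementation) makes this concrete:&lt;/p&gt;

```r
# Minimal sketch of reverse coding: reflect scores about the scale midpoint,
# i.e., reversed = min + max - score (hypothetical re-implementation)
reverse_code_sketch = function(scores, min = 1, max = 5) {
  min + max - scores
}

reverse_code_sketch(2)                     # 4
reverse_code_sketch(2:7, min = 1, max = 7) # 6 5 4 3 2 1
reverse_code_sketch(1, min = 0, max = 1)   # 0
```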

&lt;p&gt;Now, let’s reverse code a vector of scores increasing sequentially from 2 to 7 on a 1-to-7 Likert scale. We can use the “min” and “max” arguments to define the range of the scale: here, we’ll set min = 1 and max = 7. We’ll pass through 2:7, a vector of scores ranging from 2 to 7:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# reverse code a vector of scores on a Likert scale of 1-7
reverseCode(2:7, min = 1, max = 7) 

# output
[1] 6 5 4 3 2 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We can also use the function to reverse code binary responses. For instance, assume we have binary responses of 0 and 1. Let’s reverse code a score of “1” to a “0” by setting the “min” and “max” arguments to 0 and 1, respectively:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# reverse code binary response
reverseCode(1, min = 0, max = 1)

# output
[1] 0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

</description>
        <pubDate>Thu, 04 Jun 2020 00:00:00 +0000</pubDate>
        <link>http://ptichko.github.io/2020/06/04/R-Function-To-Reverse-Code-Likert-Scale.html</link>
        <guid isPermaLink="true">http://ptichko.github.io/2020/06/04/R-Function-To-Reverse-Code-Likert-Scale.html</guid>
        
        
      </item>
    
  </channel>
</rss>
