Artificial Neural Network from Scratch with Python

Artificial Neural Networks are all the rage these days as Artificial Intelligence and Deep Learning take over our lives. I had the pleasure of speaking at the Tampa Bay Artificial Intelligence meetup recently to walk through a demo where we build a simple neural network step by step to predict gender based on height and weight.
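
The full demo from the meetup is on GitHub below; as a taste of the idea, here is a minimal single-neuron sketch with made-up height and weight values, a hand-picked learning rate, and numpy only (it is an illustration, not the meetup code):

import numpy as np

# Toy data: [height_cm, weight_kg]; label 1 = male, 0 = female (illustrative values only)
X = np.array([[181.0, 80.0], [177.0, 70.0], [160.0, 57.0], [154.0, 54.0]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])

# Standardize the features so the sigmoid does not saturate
X = (X - X.mean(axis=0)) / X.std(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(1)
w = np.random.randn(2, 1)  # one weight per input feature
b = 0.0

for _ in range(5000):
    y_hat = sigmoid(X @ w + b)         # forward pass
    error = y_hat - y                  # gradient of the log-loss w.r.t. the pre-activation
    w -= 0.5 * (X.T @ error) / len(X)  # gradient descent step on the weights
    b -= 0.5 * error.mean()            # gradient descent step on the bias

print(sigmoid(X @ w + b).round(2))     # predictions should approach [1, 1, 0, 0]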

I wanted to give a big shoutout to Dan Daniels, who is the head organizer of this awesome meetup in the Tampa Bay area. If you are ever around the area, you should make an effort to join this meetup and come to one of the events.
You can learn more about it here.

The full code from the meetup can be found at the following GitHub link.

Setting up PySpark integration with Jupyter and Python 3 on Ubuntu

This post will focus on configuring an Ubuntu virtual machine to leverage Apache Spark through Jupyter notebooks with a little bit of help from Anaconda.

To learn how to build an Ubuntu virtual machine, visit our previous post on the topic.

Configuring an Ubuntu virtual environment

Installing Java in Ubuntu

Since Spark runs on the Java Virtual Machine (JVM), the Java Software Development Kit (SDK) is a prerequisite installation for Spark to run on an Ubuntu virtual machine.

Getting ready

In order for Spark to run on a local machine or in a cluster, a minimum of Java 6 is required.

How to do it

Java can be installed on Ubuntu through the terminal application, which can be found by searching for the app and then locking it to the launcher on the left-hand side, as seen in the following screenshot:

An initial test for Java on the virtual machine can be performed by running the following command at the terminal:

$ java -version

If Java is not currently installed, the output is the following:
The program ‘java’ can be found in the following packages:

    * default-jre
    * gcj-5-jre-headless
    * openjdk-8-jre-headless
    * gcj-4.8-jre-headless
    * gcj-4.9-jre-headless
    * openjdk-9-jre-headless
    Try: sudo apt install

Ubuntu is recommending the sudo apt install method for Java. This method can be performed by executing the following four commands at the terminal:

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer

How it works

After accepting the necessary license agreements for Oracle, a secondary test of Java on the virtual machine should reveal the following output, indicating that a successful installation has occurred for Java 8:

$ java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Installing Anaconda in Ubuntu

Current versions of Ubuntu Desktop come preinstalled with Python.

Getting ready

While it is convenient that Python comes preinstalled with Ubuntu, the installed version is Python 2.7, as seen in the following output:

$ python --version
Python 2.7.12

While Python 2 is fine, it is considered legacy Python and is facing an end-of-life date in 2020. It is recommended that all new Python development be performed with Python 3, as will be the case in this blog post. Up until recently, Spark was only available with Python 2; that is no longer the case.

How to do it

A convenient way to install Python 3, as well as many dependencies and libraries, is through Anaconda. Anaconda is a free and open-source distribution of Python that manages the installation and maintenance of many of the most common packages used in Python. Anaconda can be downloaded for Linux through the following link:
https://www.anaconda.com/download/
The current version of Anaconda as of this blog post is v4.4 and the current version of Python 3 is v3.6. Once downloaded, the Anaconda installation file can be viewed by accessing the Downloads folder using the following command:

$ cd Downloads/
~/Downloads$ ls
Anaconda3-4.4.0-Linux-x86_64.sh

Once in the Downloads folder, the installation for Anaconda is initiated by executing the following command:

~/Downloads$ bash Anaconda3-4.4.0-Linux-x86_64.sh
Welcome to Anaconda3 4.4.0 (by Continuum Analytics, Inc.)
In order to continue the installation process, please review the license agreement.
Please, press ENTER to continue

Please note the version of Anaconda, as well as any other software installed, may differ as newer updates are released to the public.
During the installation process, it is important to confirm the following conditions:

    1. Anaconda is installed to the /home/username/Anaconda3 location
    2. Anaconda installer prepends Anaconda3 install location to PATH in the /home/username/.bashrc

How it works

Once the installation is complete, it may be necessary to restart the Terminal application to confirm that Python 3 is installed as the default Python environment through Anaconda.

$ python --version
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)

The Python 2 version is still available under Linux but will require an explicit call when executing a script, as seen in the following command:

~$ python2 --version
Python 2.7.12

Installing Spark in Ubuntu

Unlike Python, Spark does not come pre-installed on Ubuntu and therefore will need to be downloaded and installed.

Getting ready

Spark can be downloaded directly from the following website:
https://spark.apache.org/downloads.html
For the purposes of development with deep learning, the following preferences will be selected for Spark:

    1. Spark release: 2.2.0 (Jul 11 2017)
    2. Package type: Pre-built for Apache Hadoop 2.7 and later
    3. Download type: Direct Download

Once the download link is selected, the following file will be downloaded to the Downloads folder in Ubuntu:
spark-2.2.0-bin-hadoop2.7.tgz

How to do it

The file can also be viewed at the terminal level by executing the following commands:

$ cd Downloads/
~/Downloads$ ls
spark-2.2.0-bin-hadoop2.7.tgz

The tgz file can be extracted by executing the following command:

~/Downloads$ tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz

Another look at the Downloads directory shows both the tgz file as well as the extracted folder:

~/Downloads$ ls
spark-2.2.0-bin-hadoop2.7 spark-2.2.0-bin-hadoop2.7.tgz

It is ideal to move the extracted folder from the Downloads folder to the Home folder by executing the following command:

~/Downloads$ mv spark-2.2.0-bin-hadoop2.7 ~/
~/Downloads$ ls
spark-2.2.0-bin-hadoop2.7.tgz
~/Downloads$ cd
~$ ls
anaconda3 Downloads Pictures Templates
Desktop examples.desktop Public Videos
Documents Music spark-2.2.0-bin-hadoop2.7

The spark-2.2.0-bin-hadoop2.7 folder has now been moved to the Home folder and can be viewed by selecting the Files icon on the left-hand side toolbar, as seen in the following screenshot:

How it works

Spark is now installed and can be initiated from the terminal by executing the following command:

~$ cd ~/spark-2.2.0-bin-hadoop2.7/
~/spark-2.2.0-bin-hadoop2.7$ ./bin/pyspark

The output from executing Spark at the command line should look similar to what is shown in the following screenshot:

Two important features to note when initializing Spark are that it is running under the Python 3.6.1 :: Anaconda 4.4.0 (64-bit) environment and that the Spark logo shows version 2.2.0. A final test to ensure Spark is up and running at the terminal is to execute the following command and confirm that the SparkContext is driving the cluster in the local environment.

>>> sc
<SparkContext master=local[*] appName=PySparkShell>

Congratulations! Spark is successfully installed on the local Ubuntu virtual machine. But not everything is complete. Spark development is best when Spark code can be executed within a Jupyter notebook, especially for deep learning. Thankfully, Jupyter was installed with the Anaconda distribution set up earlier in this post.
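
As an additional quick test, a one-line job in the PySpark shell confirms that the local cluster can actually execute work (the sum of 0 through 99):

>>> sc.parallelize(range(100)).sum()
4950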

To learn more about Jupyter notebooks and their integration within Python, visit the following website: http://jupyter.org

Configuring PySpark within Jupyter notebooks

When learning Python for the first time, it is useful to use Jupyter notebooks as an interactive development environment (IDE). This is one of the main reasons why Anaconda is so powerful: it fully integrates all of the dependencies between Python and Jupyter notebooks.

Getting ready

The same can be done with PySpark and Jupyter notebooks. While Spark is written in Scala, PySpark allows Spark code to be written and executed in Python instead.

How to do it

PySpark is not configured to work within Jupyter notebooks by default, but a slight tweak of the .bashrc script can remedy this issue. The .bashrc script can be accessed by executing the following command:

$ nano .bashrc

Scrolling all the way to the end of the script should reveal the most recent modification, which should be the PATH set by Anaconda during the installation earlier in this post.

# added by Anaconda3 4.4.0 installer
export PATH="/home/asherif844/anaconda3/bin:$PATH"

Underneath the PATH added by the Anaconda installer, a custom function can be added that connects the Spark installation to the Jupyter notebook installation from Anaconda3. For the purposes of this post, that function will be called sparknotebook.

function sparknotebook()
{
# point to the Spark folder extracted earlier in this post
export SPARK_HOME=/home/asherif844/spark-2.2.0-bin-hadoop2.7
# run Spark workers with Python 3
export PYSPARK_PYTHON=python3
# launch the PySpark driver inside a Jupyter notebook
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
$SPARK_HOME/bin/pyspark
}

The new .bashrc script should look like the following once updated and saved:

Once the .bashrc file has been saved and exited, it is recommended to apply the update by executing the following command and restarting the terminal application:

$ source .bashrc

The sparknotebook function can now be accessed directly from the terminal by executing the following command:

$ sparknotebook

The function should then initiate a brand new Jupyter notebook session through the default web browser.

How it works

A new Python notebook with an .ipynb extension can be created by clicking on the New button on the right-hand side and selecting Python 3 under Notebook, as seen in the following screenshot:

Once again, just as was done at the terminal level for Spark, the command sc will be executed within the notebook to confirm that Spark is up and running through Jupyter.

Ideally, the Version, Master, and AppName should be identical to the earlier output when sc was executed at the terminal. If this is the case, then PySpark has been successfully installed and configured to work with Jupyter notebooks.
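
As one more quick test, the notebook can also run a small DataFrame job using the spark session object that the PySpark shell creates alongside sc in Spark 2.x (the sample rows here are just placeholders):

df = spark.createDataFrame([(1, 'Spark'), (2, 'PySpark')], ['id', 'name'])
df.show()  # prints a two-row table, confirming jobs run from the notebook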

Summary

Congratulations once again! This time everything is complete. The focus of this post was to create an optimal work environment within an Ubuntu virtual machine to help with developing deep learning models on top of Spark in later posts. That required installing many of the dependencies needed for Spark to work within the virtual machine, as well as configuring the virtual machine so that Jupyter notebooks can run Spark code through PySpark.

Setting up an Ubuntu Virtual Sandbox

An Ubuntu virtual environment can come in handy for many applications. There are several advantages to using Ubuntu as the go-to virtual machine, not the least of which is cost: since it is based on open-source software, the Ubuntu operating system is free to use and does not require licensing. Second, it is very user-friendly to get started with. This blog post will help with getting started whether the host environment is Windows or Mac.

Downloading an Ubuntu Desktop image

Virtual environments provide an optimal development workspace by isolating the relationship to the physical or host machine. Developers may be using all types of machines for their host environments, such as a MacBook running macOS or a Microsoft Surface running Windows; however, to ensure consistency in the output of the code executed, an Ubuntu Desktop virtual environment will be deployed that can be used and shared across a wide variety of host platforms.

In order to create a virtual machine of Ubuntu Desktop, we first need to download the file from the official website:

https://www.ubuntu.com/download/desktop.

There are some minimum system recommendations for running the image file, which is in an iso format:

    Minimum of 2 GHz dual-core processor
    Minimum of 2 GB system memory
    Minimum of 25 GB of free hard drive space

As of the writing of this post, Ubuntu Desktop 16.04.3 is available for download. Once the download is complete, you should have the following file saved to a folder that is accessible to you:

ubuntu-16.04.3-desktop-amd64.iso

Installing and configuring Ubuntu Desktop on a virtual machine

There are several options for desktop virtualization software depending on whether you are currently developing on macOS or Windows.

Configuring Ubuntu with VMWare Fusion on macOS

There are two common software applications for virtualization if you are using macOS: VMWare Fusion and Parallels. This section will focus on building a virtual Ubuntu machine with VMWare Fusion. Once Fusion is up and running, we can begin our configuration process by clicking on the + button on the upper left-hand side and selecting New…, as seen in the following screenshot:

Once the selection has been made, the next step is to select the option to Install from Disk or Image as seen in the following screenshot:

To learn more about VMWare and Parallels and decide which program is a better fit, visit the following websites:

  • https://www.vmware.com/products/fusion.html to download and install VMWare Fusion for Mac
  • https://parallels.com to download and install Parallels Desktop for Mac

Continuing onto the next step involves selecting the operating system iso file that was downloaded from the Ubuntu Desktop website, as seen in the following screenshot:

The next step will ask whether or not to choose Linux Easy Install. It is recommended to do so, as well as to incorporate a username/password combination for the Ubuntu environment, as seen in the following screenshot:

The configuration process is almost complete. A Virtual Machine Summary is displayed with the option to Customize Settings to increase the Memory and Hard Disk, as seen in the following screenshot:

20 GB of hard disk space is sufficient for the virtual machine; however, bumping up the memory to 2 GB or even 4 GB will assist with the performance of the virtual machine when executing Spark code in later posts. This can be updated by selecting Processors and Memory under the Settings of the virtual machine and increasing the Memory to the desired amount, as seen in the following screenshot:

All that is remaining is to start up the virtual machine for the first time, which initiates the installation process of the system onto the virtual machine. Once all the setup is complete and the user has logged in, the Ubuntu virtual machine should be available for development, as seen in the following screenshot:

Configuring Ubuntu with Oracle VirtualBox on Windows

There are several options to virtualize systems within Windows. Oracle VirtualBox provides a straightforward process to get an Ubuntu Desktop virtual machine up and running on top of a Windows environment.

Once VirtualBox Manager is initiated, a new virtual machine is created by selecting the New icon and specifying the Name, Type, and Version of the machine as seen in the following screenshot:

To learn more about Oracle VirtualBox and decide whether or not it is a good fit, visit the following website: https://www.virtualbox.org/wiki/Downloads and select Windows hosts to begin the download process.

By selecting Expert Mode, several of the configuration steps are consolidated as seen in the following screenshot:

Ideal memory size should be set to at least 2048 MB or preferably 4096 MB depending on the resources available on the host machine. Additionally, an optimal hard disk size for an Ubuntu virtual machine to perform deep learning algorithms is 20 GB as seen in the following screenshot:

The final step is to point the virtual machine manager to the start-up disk location where the Ubuntu iso file was downloaded to and then Start the creation process, as seen in the following screenshot:

After allotting some time for the installation, the virtual machine is complete and ready for development by selecting the Start icon as seen in the following screenshot:

Happy Virtualizing!

Analyzing the Inauguration Speech of President Donald Trump with Python and SPSS

I think we can all agree that President Donald Trump certainly has a way with words. Quite often his words elicit a strong reaction from both his strongest supporters and harshest critics. I decided to take a closer look at his inauguration speech from January 2017 and break down the sentiments behind the words, free from any external bias, with a little bit of text analytics.

Before we can do any analysis, we first have to get the full text of his speech in a usable format. I found a copy of the entire speech available from CNN:

http://www.cnn.com/2017/01/20/politics/trump-inaugural-address/

Python has many great libraries for scraping text off of the web. We will use Python v3.6 inside of a Jupyter Notebook to retrieve our results. A word of caution: always read the fine print of a website regarding scraping data off of their site.
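
The exact code from the notebook is shown in the screenshots; a minimal version of the retrieval step, assuming the requests library, could look like this:

import requests

# Fetch the raw HTML of the CNN article and store it in the variable htmls
url = 'http://www.cnn.com/2017/01/20/politics/trump-inaugural-address/'
htmls = requests.get(url).text
print(htmls[:500])  # still full of markup at this point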

When we view the retrieved data in the variable htmls, we find that it is littered with web syntax that is unnecessary for our analysis, as seen in the following screenshot.

We can see the text from the inauguration, but it is squeezed between HTML tags that have a class of “zn-body__paragraph”. Thankfully, we can use the BeautifulSoup library within Python to extract only the necessary text from the tags and print it out, as seen in the following code.
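
A minimal version of that extraction step with BeautifulSoup could look like the following (variable names here are only for illustration):

from bs4 import BeautifulSoup

soup = BeautifulSoup(htmls, 'html.parser')

# Keep only the elements that carry the speech text
paragraphs = soup.find_all(class_='zn-body__paragraph')
speech = [p.get_text().strip() for p in paragraphs]
print('\n'.join(speech))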

The text now looks much more readable and is available in a format for us to begin our text analysis. The final set of code we will execute in Python will export the text, line by line, to an MS Excel file that can be read by SPSS.
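
A minimal version of that export step using pandas (with an Excel writer such as openpyxl installed) could look like this, writing both the sheet and the column as Speech:

import pandas as pd

# One line of the speech per row, in a sheet and column both named 'Speech'
pd.DataFrame({'Speech': speech}).to_excel('Speech.xlsx', sheet_name='Speech', index=False)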

We can now view our Excel file, Speech.xlsx, in our local directory to confirm that the text from the speech was successfully exported.

As we can see, our speech was exported line by line, each line to a new cell within Excel, exactly as we specified in Python, with a tab titled Speech and a column header also titled Speech. We are now ready to begin our text analysis with IBM SPSS.

We will be using SPSS Modeler v18.0. First we will need to create a new stream and use Excel as our data source from the Sources tab, as seen in the following screenshot.

We can then edit the data source and direct it to our Excel output file from Python. To confirm that the file was uploaded to SPSS, we can preview the first ten records as seen in the following screenshot.

Next, we will connect a Text Mining node from the IBM Text Analytics tab to the Excel data source.

We can then configure the Text Mining node to point to the Speech column as the Text field from the Excel file under the Fields tab as seen in the following screenshot.

We also want to use a specific model for sentiment analysis on our text, and that requires selecting the Model tab and loading a specific Text Analysis Package.

We wish to load the Sentiments package under the English package as seen in the following screenshot.

Once the package has been selected, we can click on the Run button to execute the model.

When the model is executed, we can immediately view the different sentiment types by frequency.

Each sentence of the speech is being classified as a document or a Doc. About 72% of the documents do not fall under any specific category; this is not uncommon, as there are words that do not associate with any specific type of sentiment. 12% of the sentences fall under a positive sentiment category and 5% of the sentences fall under a negative sentiment category. If we wish to further investigate the negative documents, we can do so by highlighting the sentiment type and selecting the Display icon as seen in the following screenshot.

Once selected, we can view each of the 10 documents that are associated with Negative sentiments. Below are 2 of the 10 documents.

For the first sentence, the model associated the word great with a positive sentiment and the word restore with a negative sentiment. Therefore, a sentence can have more than one sentiment category associated with it. The second sentence associated both too long and closed as words affiliated with negative sentiments.

In addition to sentiment, SPSS Modeler has the ability to group words of sentences into broader concepts. We can perform this function by clicking on the build icon, as seen in the following screenshot.

We can see the top category is Americans, followed by nation, and business management. It is to be expected that President Trump would focus on his business management skills in his first speech to the country as President. If we expand the Americans category, we can view the different ways that the word is used within a sentence.

I hope you enjoyed this opportunity to dig through the inauguration speech of Donald J. Trump and the different ways we could analyze his speech and categorize the words and sentences using IBM SPSS Modeler.

Flatten out Ranges in Python

I frequently get requests from Excel users to modify a spreadsheet table that was formatted in a non-functional manner. Many of these manipulations occur with number ranges or a start and end date. It may seem fine to one user but prove unusable for a more advanced number-cruncher who needs the data in a different form. One such manipulation revolves around ranges: you have a start date and an end date, but you don’t have all the values in between. The Excel user may want to see all the dates in between to develop a time series chart.

This manipulation can be easily done using the range function as well as a for loop. Since I’m a comic book geek, I decided to use an example of how to flatten out dates related to the ages of comics by decade.

Please note this script is utilizing Python 3.6
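
As a quick standalone illustration of the core idea before the full walkthrough: Python's range excludes its stop value, which is why the loop further down adds 1 to the end year.

list(range(1930, 1939 + 1))
# [1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939]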

==========================================================================
==========================================================================

First: Import the pandas module to build out a dataframe with the fields that we wish to manipulate

    In [1]:
    import pandas as pd
    

Next: build out the dataset as a dictionary

    In [2]:
    raw_data = {
        'start_year' : [1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000],
        'end_year'   : [1939, 1949, 1959, 1969, 1979, 1989, 1999, 2009],
        'age'        : ['Golden Age','WWII', 'Silver Age', 'Return of the Super Hero', 'The Age of Marvel', 'The Age of New Publishers', 'The Age of Fluff', 'The Age of Independents' ]   
    }
    

Preview the Data

    In [3]:
    raw_data
    
    Out[3]:
    {'age': ['Golden Age',
      'WWII',
      'Silver Age',
      'Return of the Super Hero',
      'The Age of Marvel',
      'The Age of New Publishers',
      'The Age of Fluff',
      'The Age of Independents'],
     'end_year': [1939, 1949, 1959, 1969, 1979, 1989, 1999, 2009],
     'start_year': [1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000]}

Convert dictionary to DataFrame using pandas

    In [4]:
    df = pd.DataFrame(raw_data, columns = ['start_year', 'end_year', 'age'], index=None)
    

Preview the results of the DataFrame

    In [5]:
    df
    
    Out[5]:
    start_year end_year age
    0 1930 1939 Golden Age
    1 1940 1949 WWII
    2 1950 1959 Silver Age
    3 1960 1969 Return of the Super Hero
    4 1970 1979 The Age of Marvel
    5 1980 1989 The Age of New Publishers
    6 1990 1999 The Age of Fluff
    7 2000 2009 The Age of Independents

Apply a loop where we convert the ranges into lists and append the name of the range to each list

We continue to do this for each list of date ranges

    In [6]:
    df_combined = []
    for i in range(0,len(df)):
        # convert the start/end years of this era into a full list of years
        range_year = list(range(df['start_year'][i],df['end_year'][i]+1))
        df_1 = pd.DataFrame(range_year,columns = ['Year'])
        # attach the era name to every year in the range
        df_1['age']=df['age'][i]
        df_combined.append(df_1)
    # stack the individual era DataFrames into one flattened DataFrame
    df_combined = pd.concat(df_combined, axis=0)
    

We preview the newly combined DataFrame

    In [7]:
    df_combined
    
    Out[7]:
    Year age
    0 1930 Golden Age
    1 1931 Golden Age
    2 1932 Golden Age
    3 1933 Golden Age
    4 1934 Golden Age
    5 1935 Golden Age
    6 1936 Golden Age
    7 1937 Golden Age
    8 1938 Golden Age
    9 1939 Golden Age
    0 1940 WWII
    1 1941 WWII
    2 1942 WWII
    3 1943 WWII
    4 1944 WWII
    5 1945 WWII
    6 1946 WWII
    7 1947 WWII
    8 1948 WWII
    9 1949 WWII
    0 1950 Silver Age
    1 1951 Silver Age
    2 1952 Silver Age
    3 1953 Silver Age
    4 1954 Silver Age
    5 1955 Silver Age
    6 1956 Silver Age
    7 1957 Silver Age
    8 1958 Silver Age
    9 1959 Silver Age
    0 1980 The Age of New Publishers
    1 1981 The Age of New Publishers
    2 1982 The Age of New Publishers
    3 1983 The Age of New Publishers
    4 1984 The Age of New Publishers
    5 1985 The Age of New Publishers
    6 1986 The Age of New Publishers
    7 1987 The Age of New Publishers
    8 1988 The Age of New Publishers
    9 1989 The Age of New Publishers
    0 1990 The Age of Fluff
    1 1991 The Age of Fluff
    2 1992 The Age of Fluff
    3 1993 The Age of Fluff
    4 1994 The Age of Fluff
    5 1995 The Age of Fluff
    6 1996 The Age of Fluff
    7 1997 The Age of Fluff
    8 1998 The Age of Fluff
    9 1999 The Age of Fluff
    0 2000 The Age of Independents
    1 2001 The Age of Independents
    2 2002 The Age of Independents
    3 2003 The Age of Independents
    4 2004 The Age of Independents
    5 2005 The Age of Independents
    6 2006 The Age of Independents
    7 2007 The Age of Independents
    8 2008 The Age of Independents
    9 2009 The Age of Independents

    80 rows × 2 columns

    In [8]:
    df_combined.to_csv('flat file.csv', index=False)
    

Click here to learn more about ranges in Python.

Happy New Prime Year: Determine Prime Years with Python

It’s 2017! So, happy new year! Not only is 2017 a new year but it is also a prime year! What does that mean? Well, that means that 2017 is only divisible by itself and the number 1. Hopefully, this means that 2017 will be less divisive than 2016!

I’ve built a pretty straightforward function in Python that pulls every prime number from 2 to 3000:

def return_prime_number():
    list_of_prime_numbers = []
    for number in range(2, 3000):
        for i in range(2, number):
            if number % i == 0:
                break  # a divisor was found, so number is not prime
        else:
            # the inner loop finished without a break, so number is prime
            list_of_prime_numbers.append(number)
    print(list_of_prime_numbers)

return_prime_number()

We can then see the output of the list of all prime numbers between 2 and 3000:

    [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049, 1051, 1061, 1063, 1069, 1087, 1091, 1093, 1097, 1103, 1109, 1117, 1123, 1129, 1151, 1153, 1163, 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, 1381, 1399, 1409, 1423, 1427, 1429, 1433, 1439, 1447, 1451, 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, 1579, 1583, 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1637, 1657, 1663, 1667, 1669, 1693, 1697, 1699, 1709, 1721, 1723, 1733, 1741, 1747, 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811, 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877, 1879, 1889, 1901, 1907, 1913, 1931, 1933, 1949, 1951, 1973, 1979, 1987, 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203, 2207, 2213, 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503, 2521, 2531, 2539, 2543, 2549, 2551, 2557, 2579, 2591, 2593, 2609, 2617, 2621, 2633, 2647, 2657, 2659, 2663, 2671, 2677, 2683, 2687, 2689, 2693, 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741, 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801, 2803, 2819, 2833, 2837, 2843, 2851, 2857, 2861, 2879, 2887, 2897, 2903, 2909, 2917, 2927, 2939, 2953, 2957, 2963, 2969, 2971, 2999]

There you have it. Pretty straightforward. We see that our last Prime Number Year was in 2011 and our next Prime Number Year will be in 2027, a whopping 10 years from now.