Author: ishanajay2012

  • The Simple Guide to AI and Machine Learning With Python

In this guide, you will learn how to create an AI that recognizes handwriting with Python, using dense neural networks and the MNIST dataset. This guide uses TensorFlow to train your AI, and basic knowledge of the linear algebra used in AI is strongly recommended. You can refer to this guide to understand the linear algebra used in AI. In the next part, we upgrade the neural network’s accuracy using convolutional neural networks.

    Prerequisites

To do this, you will first need to install Python and make sure Pip is on your PATH (via your .bashrc file on Linux and macOS, or the Environment Variables dialog on Windows). Then, run the commands below to install the required libraries:

Shell
    pip install "tensorflow<2.11"
    pip install pandas openpyxl numpy matplotlib

Note: If installing TensorFlow with the version pin does not work, you can run pip install tensorflow instead. Everything will still function normally, but TensorFlow may not be able to utilize your GPU.

    Writing The Code

    In a new Python file, we will first import the dataset and import the libraries needed:

    Python
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras.datasets import mnist
    from tensorflow.keras import backend as K
    import numpy as np
    import matplotlib.pyplot as plt
    from tensorflow.keras.models import Sequential 
    from tensorflow.keras.layers import Dense, Flatten

We then define some helper functions that will make it easier to visualize the data later on in the code. I will not go over how they work; they are not a necessity, just there to help us visualize the data better:

    Python
    def show_min_max(array, i):
      random_image = array[i]
      print(random_image.min(), random_image.max())
    
    def plot_image(array, i, labels):
      plt.imshow(np.squeeze(array[i]))
      plt.title(" Digit " + str(labels[i]))
      plt.xticks([])
      plt.yticks([])
      plt.show()
      
    def predict_image(model, x):
      x = x.astype('float32')
      x = x / 255.0
    
      x = np.expand_dims(x, axis=0)
    
      image_predict = model.predict(x, verbose=0)
      print("Predicted Label: ", np.argmax(image_predict))
    
      plt.imshow(np.squeeze(x))
      plt.xticks([])
      plt.yticks([])
      plt.show()
      return image_predict
      
    
    def plot_value_array(predictions_array, true_label, h):
      plt.grid(False)
      plt.xticks(range(10))
      plt.yticks([])
      thisplot = plt.bar(range(10), predictions_array[0], color="#777777")
      plt.ylim([(-1*h), h])
      predicted_label = np.argmax(predictions_array)
      thisplot[predicted_label].set_color('red')
      thisplot[true_label].set_color('blue')
      plt.show()

    In the MNIST Data set (the dataset that we will be using), there are 60,000 training images and 10,000 test images. Each image is 28 x 28 pixels. There are 10 possible outputs (or to be more technical, output classes), and there is one color channel, meaning that each image is stored as a 28 x 28 grid of numbers between 0 and 255. It also means that each image is monochrome.

    We can use this data to set some variables:

    Python
    img_rows = 28 # Rows in each image
    img_cols = 28 # Columns in each image
    num_classes = 10 # Output Classes

    Now, we will load the train images and labels and load in another set of images and labels used for evaluating the model’s performance after we train it (these are called test images/labels).

    What Are Images and Labels?

These are also called data and labels. The data is the input the computer is given, while the labels are the correct answers it is trying to predict. Most of the time, the model tries to predict the labels based on the data it is given.

    Python
    (train_images, train_labels), (test_images, test_labels) = mnist.load_data()

The next step is not required, and we don’t make use of it later in the code, but it is recommended, especially if you are using a Python notebook: create a duplicate, untouched copy of the train and test data as a backup.

    Python
    (train_images_backup, train_labels_backup), (test_images_backup, test_labels_backup) = mnist.load_data()

    Now, we test to see if we loaded the data correctly:

    Python
    print((train_images.shape, test_images.shape))
    Expected Output
    ((60000, 28, 28), (10000, 28, 28))
    Why Are They Those Shapes?

The images are 28×28, which explains the last two dimensions in the shape. The data is currently stored as a plain grid of pixel values (not yet in the shape our neural network expects; we will fix this later), so there are no extra dimensions. If you remember what I said earlier, there are 60,000 training images and 10,000 testing images, which explains the first dimension of each tensor.

    The whole purpose of this tutorial is to get you comfortable with machine learning, which is why I am going to let you in on the fact that data can be formatted one way or another, and it is up to you to understand how to get your datasets to work with your model.

Because the MNIST dataset was made for this purpose, it is essentially ready to use, and little to no reshaping or reformatting is required.

    However, you might come across data you need to use for your model that is not that well formatted or ready for your machine learning model or scenario.

    It is important to develop this skill, as in your machine learning career, you are going to have to deal with different types of data.

Now, let’s do the only reshaping we really need to do: reshaping the data to fit our neural network’s input layer by converting each image from a plain grid of pixel values into the shape the network can read. We do this by adding the number of color channels as a dimension, and because the images are monochrome, we only need to add a single channel dimension.

    What is a Shape in Neural Networks?

    A shape is the size of the linear algebra object you want to represent in code. I provide an extremely simple explanation of this here.

    What is a Neural Network?

A neural network is a type of AI that lets computers learn in a way loosely modeled on the human brain. The type we will use today, a sequential model, consists of layers of neurons, each of which passes its computed data to the next layer, until the data finally passes through the output layer, which narrows the possible results down to however many output classes (the desired number of possible outcomes) you want. This whole cycle begins at the input layer, which takes data of a fixed shape and passes it through to the rest of the layers.

    Python
    train_images = train_images.reshape(train_images.shape[0], img_rows, img_cols, 1)
    test_images = test_images.reshape(test_images.shape[0], img_rows, img_cols, 1)
    # Adding print statements to see the new shapes.
    print((train_images.shape, test_images.shape))
    Expected Output
    ((60000, 28, 28, 1), (10000, 28, 28, 1))

    Now, we define the input shape, to be used when we define settings for the model.

    What is an Input Shape?

    An input shape defines the only shape that the input layer is capable of taking into the neural network.
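As a concrete sketch, here is what defining that shape looks like for our 28×28 monochrome images (the variable name input_shape is just a convention; you can also pass the tuple directly to the layer, as we do later):

```python
# Assumed variable name, built from the values defined earlier
# (repeated here so the snippet is self-contained).
img_rows = 28
img_cols = 28
input_shape = (img_rows, img_cols, 1)  # rows, columns, one color channel
print(input_shape)  # (28, 28, 1)
```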

Now we will begin data cleaning: making the data easier for the model to process.

    First, let’s plot the digit 5 as represented in the MNIST dataset:

    Python
    plot_image(train_images, 100, train_labels)

    This should output the following plot:

    Now, let’s see what the numbers representing pixel intensity look like inside the image:

    Python
    out = ""
    for i in range(28):
      for j in range(28):
        f = int(train_images[100][i][j][0])
        s = "{:3d}".format(f)
        out += (str(s)+" ")
      print(out)
      out = ""
Expected Output
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   2  18  46 136 136 244 255 241 103   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0  15  94 163 253 253 253 253 238 218 204  35   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0 131 253 253 253 253 237 200  57   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0 155 246 253 247 108  65  45   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0 207 253 253 230   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0 157 253 253 125   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0  89 253 250  57   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0  89 253 247   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0  89 253 247   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0  89 253 247   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0  21 231 249  34   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0 225 253 231 213 213 123  16   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0 172 253 253 253 253 253 190  63   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   2 116  72 124 209 253 253 141   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0  25 219 253 206   3   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 104 246 253   5   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 213 253   5   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0  26 226 253   5   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 132 253 209   3   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0  78 253  86   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 
      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 

To help us visualize the data another way, let’s run the function below to show the minimum and maximum values in the data:

    Python
    show_min_max(train_images, 100)
    Expected Output
    0 255

    Now we can start the actual data cleaning. As you saw above, the data in the image is represented as an integer between zero and 255. While the network could learn on this data, let’s make it easier for the network by representing these values as a floating point number between zero and one. This keeps the numbers small for the neural network.


First things first, let’s convert the data to floating-point numbers:

    Python
    train_images = train_images.astype('float32')
    test_images = test_images.astype('float32')

Now that the data is stored as floating-point numbers, we need to normalize it so the values range from 0 to 1 instead of 0 to 255. We can achieve this with division:

    Python
train_images /= 255
test_images /= 255

    Now we can see if any changes were made to the image:

    Python
    plot_image(train_images, 100, train_labels)

    The code above should output:

As you can see, no visible changes were made to the image. Now we will run the code below to check whether the data was actually normalized:

    Python
    out = ""
    for i in range(28):
      for j in range(28):
        f = (train_images[100][i][j][0])
        s = "{:0.1f}".format(f)
        out += (str(s)+" ")
      print(out)
      out = ""
Expected Output
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.2 0.5 0.5 1.0 1.0 0.9 0.4 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.4 0.6 1.0 1.0 1.0 1.0 0.9 0.9 0.8 0.1 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 1.0 1.0 1.0 1.0 0.9 0.8 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 1.0 1.0 1.0 0.4 0.3 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 1.0 1.0 0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 1.0 1.0 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 1.0 1.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.9 1.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9 1.0 0.9 0.8 0.8 0.5 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 1.0 1.0 1.0 1.0 1.0 0.7 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.3 0.5 0.8 1.0 1.0 0.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.9 1.0 0.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.9 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 1.0 0.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 1.0 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

    As you can see, the image is not affected, but the data is easier for the neural network to deal with.

If we don’t want to sift through all those numbers but still want to check that we cleaned the data correctly, we can look at the minimum and maximum values of the data:

    Python
    print("The min and max are: ")
    show_min_max(train_images, 100)
Expected Output
    The min and max are: 
    0.0 1.0

We could start building the model now, but there is a problem we need to address. MNIST’s labels are simply the digits 0 to 9, because the entire dataset is handwritten digits 0 to 9. However, if we feed the network raw digit labels, it will treat them as ordered numbers (assuming, for example, that 1 is more similar to 2 than to 7, because numerically it is, even though a handwritten 7 looks more like a 1), which is not what we want: the classes are independent categories, not points on a number line. To fix this, we convert the labels to a categorical format, one that Keras won’t treat as ordered, making it view each digit independently:

    Python
train_labels = keras.utils.to_categorical(train_labels, num_classes)
test_labels = keras.utils.to_categorical(test_labels, num_classes)

    This is also called One-Hot Encoding.
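Keras’s to_categorical handles this conversion for us, but it can help to see exactly what one-hot encoding produces. Here is a minimal NumPy sketch of the same idea (an illustration only, not Keras’s internal implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Each label becomes a row of zeros with a single 1 at the label's index.
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([5, 0, 3], 10))
# The row for label 5 has its 1 at index 5, the row for 0 at index 0, and so on.
```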

    Now, we can finally start building our model.

A complete pass over the entire training dataset is called an epoch. Generally speaking, more epochs yield more accurate results, but take longer to train. Finding the balance between reasonable training time and good results is important when developing an AI model.

    For now, we are just going to be training the model with ten epochs, but this number can be adjusted as you wish.

    Python
    epochs = 10

    In this tutorial, we will be making a sequential model. In the future, you may need to make other types of models.

    Defining our model:

    Python
    model = Sequential()

    Now, we need to add the first layer (also called the input layer, as it takes input):

    Python
model.add(Flatten(input_shape=(28, 28, 1)))

That layer is a Flatten layer. It converts each 28×28×1 image into a flat, one-dimensional array of 784 numbers that the following layers can process. We prepared the data for this earlier. Because the layer does not know what shape the incoming data is stored as, we have to specify it with the input_shape parameter.

    Now, we can add the layers needed.

    We will add a Dense layer below, which will perform predictions on the data. We can configure a lot here, and in the future as a machine learning engineer, you will need to learn what the proper configurations for your scenario are. For now, we are going to use the activation function ReLU and put 16 neurons in this layer.

    What is ReLU?

ReLU is an activation function that stands for Rectified Linear Unit. It is a simple nonlinear function: it passes positive values through unchanged and returns 0 for any negative input.
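ReLU is simple enough to write out by hand. The sketch below is just an illustration of the function itself, not anything the model needs:

```python
import numpy as np

def relu(x):
    # Negative values become 0; positive values pass through unchanged.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# The negatives become 0; 1.5 and 3.0 are unchanged.
```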

    Python
    model.add(Dense(units=16, activation='relu'))

Finally, we will add the output layer. Its job, as the name implies, is to shrink the number of possible outputs down to the number of output classes specified. Each output from this layer represents the AI’s confidence that the corresponding class is the correct answer (in computer vision terms, this is known as the confidence score).

We make sure the network narrows this down to ten output classes (the possible outputs are the digits zero to nine) by putting ten neurons into the layer, one per digit, each outputting how likely it is that its digit is the correct answer, and by using the Softmax activation function.

    What is Softmax?

Softmax is an activation function that rescales the outputs so that they are all between zero and one and sum to one. We use it for the final layer because that lets us interpret the network’s output as a probability distribution.
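As a quick sanity check of that property, here is a minimal NumPy softmax (an illustration only; Keras applies its own implementation inside the layer):

```python
import numpy as np

def softmax(z):
    # Subtracting the max first is a standard trick for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())
# The three outputs are between 0 and 1 and sum (numerically) to 1,
# with the largest input getting the largest probability.
```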

    Python
    model.add(Dense(units=10, activation='softmax'))

    Now, we can see an overview of what our model looks like:

    Python
    model.summary()
Expected Output
    Model: "sequential"
    _________________________________________________________________
     Layer (type)                Output Shape              Param #   
    =================================================================
     flatten (Flatten)           (None, 784)               0         
                                                                     
     dense (Dense)               (None, 16)                12560     
                                                                     
     dense_1 (Dense)             (None, 10)                170       
                                                                     
    =================================================================
    Total params: 12,730
    Trainable params: 12,730
    Non-trainable params: 0
    _________________________________________________________________

As you saw above, our model is sequential, has three layers, and already has 12,730 trainable parameters. This means the network will adjust 12,730 numbers as it trains. This should be enough to correctly identify a hand-drawn digit.
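If you are curious where 12,730 comes from: a Dense layer has one weight per input-output neuron pair, plus one bias per neuron. The arithmetic below reproduces the numbers from the summary:

```python
# Parameter count for a Dense layer: inputs * units + units (biases).
flatten_out = 28 * 28 * 1                 # 784 values out of the Flatten layer
dense_params = flatten_out * 16 + 16      # first Dense layer
output_params = 16 * 10 + 10              # output layer
print(dense_params, output_params, dense_params + output_params)
# 12560 170 12730
```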

Now, we have to compile the network, telling TensorFlow how we want it to train.

    What do All the Arguments Mean?
• The Optimizer is an algorithm that, as you probably guessed from the name, optimizes some value, making it either as big or as small as possible. In a neural network, we want to optimize the loss (a measure of how wrong the network’s predictions are) by making it as small as possible. The optimizer is the function that does this math behind the scenes. There are many optimizers, each with their own strengths and weaknesses. We will use Adam, a popular choice for image recognition, as it is fast and lightweight.
• The Loss is the difference between a model’s prediction and the actual label. There are many ways to calculate this, which is why it is important to choose the right one. The loss function you need varies based on what your neural network’s output should look like. For our categorical labels, we use Categorical Cross-Entropy.
• The Metrics. For convenience and to better visualize training, TensorFlow lets the developer choose additional metrics to display alongside the loss during training. Accuracy, or what percent of input images the model guessed correctly, is one such metric. It is related to loss but calculated separately, so accuracy and loss won’t necessarily add up to 100% or be direct inverses of each other.
    Python
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    Once our model is compiled, we can fit the model to the training data that we prepared. We will use the actual training data to train the model in a way that lets it recognize numbers.

The train_images dataset provides the inputs given to the model, while train_labels acts like the answer key, letting us track whether the network’s guesses were correct. The epochs argument sets how many epochs to run; we set it to the variable we defined earlier.

    Python
    model.fit(train_images, train_labels, epochs=epochs, shuffle=True)
    Expected Output (may vary)
    Epoch 1/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.4289 - accuracy: 0.8818
    Epoch 2/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.2530 - accuracy: 0.9291
    Epoch 3/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.2187 - accuracy: 0.9387
    Epoch 4/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1968 - accuracy: 0.9440
    Epoch 5/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1815 - accuracy: 0.9491
    Epoch 6/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1687 - accuracy: 0.9514
    Epoch 7/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1605 - accuracy: 0.9539
    Epoch 8/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1524 - accuracy: 0.9560
    Epoch 9/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1459 - accuracy: 0.9574
    Epoch 10/10
    1875/1875 [==============================] - 2s 1ms/step - loss: 0.1402 - accuracy: 0.9590

    You can notice how, as the epochs progress, the loss goes down and the accuracy goes up. This is what we want!

However, metrics measured on the training data can be misleadingly good, since the model has already seen the answers, so we need to evaluate the model to see how well it really does. We achieve this by evaluating it on the test data, data the model has never seen before.

The <model>.evaluate function takes the test data and the trained model, evaluates the model, and produces a set of metrics (also called scores) that show how well the model really does on unseen data.

Although the function takes the test labels, it never shows them to the neural network; it only uses them to grade how well the network did.

    Python
    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
    Expected Output (may vary)
    313/313 - 0s - loss: 0.1657 - accuracy: 0.9528 - 347ms/epoch - 1ms/step

As you saw above, the accuracy is stored as a decimal fraction: 0.9528 means 95.28%, which is pretty good. Note that the loss is not a percentage; it is simply a number we want to be as small as possible, and 0.1657 is reasonably low here.

    Using Our Model

    First download this image to the same folder as the Python file, and name it test.jpg.

    Now, run the code below to predict our image using <model>.predict:

    Python
    path = "test.jpg"
    
    img = tf.keras.preprocessing.image.load_img(path, target_size=(28,28), color_mode = "grayscale")
    x = tf.keras.preprocessing.image.img_to_array(img)
    true_label = 3
    p_arr = predict_image(model, x)
    plot_value_array(p_arr, true_label, 1)
    Expected Output (may vary)
    Predicted Label: 2
    ...

It probably got the answer wrong. This is because the model was trained on inverted images, meaning light handwriting on a dark background. To fix this, we simply invert the image colors:

    Python
    x_inv = 255-x

    And now we can run the prediction again:

    Python
    arr = predict_image(model, x_inv)
    plot_value_array(arr, 3, 1)
    Expected Output (may vary)
    Predicted Label: 3
    ...

    It probably got the answer correct. You have successfully built a neural network!

    Exporting The Model

To do this, simply run the code below (which saves the model to a file called my_model.h5):

    Python
    model.save('my_model.h5')

    Now if you ever want to refer to it again in another file, simply load in the sequential model:

    Python
    model = keras.models.load_model("my_model.h5", compile=False)

    Flaws in Our Code

There are flaws in our model. Firstly, if you tried evaluating it on multiple images, you may have noticed that it was not very accurate. This is because the model only performs well on images that closely resemble its training data.

Because all of the training images were white on black, it has to do a lot of guessing when it gets an image that is black on white.

We can fix this with convolutional neural networks. A convolutional network recognizes the small parts and details of an image, is much more accurate, and handles more general data better.

    Follow along for the next part, where I teach you how to optimize this with convolutional neural networks.

  • How to Listen on ATC Conversations Using a SDR


Did you know that ATC conversations and conversations between planes are freely available, with no encryption? It is legal to listen in on ATC conversations, and in this guide I will show you how.

    What You Need

    RTL-SDR Stick and Antenna (x1)

    This is the antenna and radio processor we will be using to get a signal from an air traffic control tower.

    SDRSharp by Airspy

    This is the program that we will be using to listen to these conversations and to tune the antenna.

    Initial Setup

    If it is your first time using SDR# (SDRSharp), then you must install SDR#, then install the drivers. The below guide will show you how to do so.

    First, install SDR# and let the installation wizard guide you through the process.

    Then, open the newly added program Zadig and you should see a screen like the one below.

    • A: This is where you choose the interface you want to install drivers for
    • B: This is where you check if a driver was installed
    • C: This is where you can install the drivers

    Follow the steps below:

    • First, use dropdown A to select an interface. The interface must start with Bulk-in, Interface. If you have multiple bulk-in interfaces, repeat these steps for every one
    • Next, make sure textbox B tells you that there is no driver installed
    • Finally, click Install WCID Driver (button C)

    Opening SDR#

    Once all the drivers are installed, you may close out of Zadig and open SDR#. You should see a screen like the one below.

    • A: This is the frequency selector. This is where you can choose which frequency your antenna is supposed to be tuned to. Right now it is tuned to 120 MHz, but in the next section you will learn to find the frequency of your ATC tower
    • B: This is where you can choose your radio settings. For this tutorial, keep the default settings but change the radio mode to AM
    • C: This is where you choose the source of the radio stream. Right now you want it set to RTL-SDR USB
    • D: This is where you can visualize the radio waves. You can click anywhere on this to set the frequency to the location of the waves to which you clicked. You can drag the lighter waves to set the bandwidth. You want to make sure that the bandwidth is not too big otherwise you will get interference, but not too small so you only get part of the wave. I have set my bandwidth to 7.557 kHz

    Reading Aerospace Vector Maps

Using a site like SkyVector, you can find your airport and look at the frequency listed under it, then tune to that frequency. SkyVector shows frequencies in megahertz, so read the number as MHz when entering it into SDR#.

    Some airports, like the ones marked with a star, do not have full-time ATC, meaning that planes have to talk directly to each other.

    Tune to this frequency on SDR#.

    Listening to these frequencies

Look for any spikes in these frequencies. Set the frequency to the frequency of these spikes (you can do this easily by clicking on them). Adjust the bandwidth to fit these spikes, hovering over the top-right Zoom button and using the slider below it to zoom into the waves. Then click on the top-left gear icon and adjust the settings to match the ones below:

    Now, turn the volume up and listen. If you do not hear talking, experiment with the bandwidth or choose another frequency. A good frequency should be like the one below:

    Done!

    And that is the end of the project! Pretty easy, right? There are some caveats, though. You will only get the best signal when you live no further than 50 kilometers away from an airport with a full-time ATC, and the radio tends to disconnect a lot if not screwed in fully. Either way, it is still a super cool project, and is definitely worth trying out if you are interested in this kind of thing. Frequencies might not be exact, so experiment a little!

  • How to make a GPS with Arduino

    This guide will show you how to make a simple GPS with Arduino.

    What you will need

    GPS Neo-6M

    This will be used to determine the location.

    Arduino UNO Rev3

    This will be what we use to control all the other components. Think of this as the motherboard of the build.

    Arduino IDE

    This will be used to program the Arduino

    About GPS

    Before we start this project, you need to know a little bit about GPS. Each satellite broadcasts a signal that lets the receiver work out its distance from that satellite. Once four or more satellites are in view and sending this data, the receiver can use those distances to figure out the exact location of the user.

    Credits: https://www.scienceabc.com/innovation/how-gps-global-positioning-system-works-satellite-smartphone.html
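Real GPS solves this in three dimensions with pseudoranges and a clock-bias term, but the core idea can be sketched in two dimensions: with distances to a few known points, the position falls out of simple algebra. Below is an illustrative Python sketch (the function name and reference points are made up for this example, not part of any GPS library):

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Estimate (x, y) from three known points and measured distances.
    Subtracting pairs of circle equations gives two linear equations."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Linear system: a1*x + b1*y = c1 and a2*x + b2*y = c2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# A receiver at (3, 4), with measured distances to three reference points:
x, y = trilaterate_2d((0, 0), 5.0,
                      (10, 0), math.hypot(7, 4),
                      (0, 10), math.hypot(3, 6))
print(round(x, 6), round(y, 6))  # ≈ 3.0 4.0
```

The GPS module does all of this internally; we only ever see the resulting coordinates.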

    The receiver will then present this data in the form of NMEA sentences. This is a standard communication developed by the National Marine Electronics Association. We will be using TinyGPS++ to parse these.
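As a rough illustration of what TinyGPS++ does for us on the Arduino, here is a minimal Python sketch that pulls latitude and longitude out of a single $GPGGA sentence (a widely circulated example sentence; a real parser also verifies the checksum and handles many more sentence types):

```python
def parse_gpgga(sentence):
    """Extract latitude/longitude from a $GPGGA NMEA sentence.
    NMEA encodes coordinates as ddmm.mmmm (degrees, then minutes)."""
    fields = sentence.split(",")
    assert fields[0] == "$GPGGA"

    def to_degrees(value, hemisphere):
        # The two digits before the decimal point belong to the minutes
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        decimal = degrees + minutes / 60
        return -decimal if hemisphere in ("S", "W") else decimal

    lat = to_degrees(fields[2], fields[3])
    lon = to_degrees(fields[4], fields[5])
    return lat, lon

# Example sentence: 48°07.038' N, 11°31.000' E
lat, lon = parse_gpgga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(lat, lon)  # roughly 48.1173 and 11.5167
```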

    Using the GPS with Arduino

    For this part, you don’t need the LCD. This will show you how to log the GPS output to the serial monitor. We will then parse this data using TinyGPS++.

    Preparations

    First, open Arduino IDE and you will be greeted with a blank sketch.

    Press Ctrl + Shift + I to bring up the library manager, type “TinyGPSPlus”, and install the top result.


    Code

    Now that we are all prepared, let’s start writing the code. First, we include the library that gives us a software serial port to communicate with the GPS.

    #include <SoftwareSerial.h>

    Next, we include the library that parses NMEA sentences.

    #include <TinyGPSPlus.h>

    Now, declare the communication between the Arduino and the GPS and then the parser.

    SoftwareSerial ss(4,3);
    TinyGPSPlus gps;

    After that, inside the void setup() function, we initialize serial communication between the computer and the Arduino, and between the GPS and the Arduino.

    Serial.begin(9600);
    ss.begin(9600);

    Next, we go into void loop() and add a loop so that the code below runs whenever data is arriving from the GPS.

    while (ss.available() > 0)

    Then, we feed each incoming byte to TinyGPS++, which assembles and parses the NMEA sentences.

    gps.encode(ss.read());

    Then, we create an if block so the serial monitor only displays our data when the GPS has produced a new, valid location.

    if (gps.location.isUpdated()) {
    
    }

    Now, inside the if block, we can access all the data and print it to the serial monitor.

    Serial.print("Latitude= ");
    Serial.print(gps.location.lat(), 6); // 6 for 6 decimal places
    Serial.print(" Longitude= ");
    Serial.println(gps.location.lng(), 6); // println ends the line

    Your full code should look like this:

    #include <SoftwareSerial.h>
    #include <TinyGPSPlus.h>
    
    SoftwareSerial ss(4,3);
    TinyGPSPlus gps;
    
    void setup() {
      // put your setup code here, to run once:
      Serial.begin(9600);
      ss.begin(9600);
    }
    
    void loop() {
      // Feed every incoming byte to the TinyGPS++ parser
      while (ss.available() > 0) {
        gps.encode(ss.read());
    
        if (gps.location.isUpdated()) {
          Serial.print("Latitude= ");
          Serial.print(gps.location.lat(), 6); // 6 for 6 decimal places
          Serial.print(" Longitude= ");
          Serial.println(gps.location.lng(), 6);
        }
      }
    }
    

    Wiring

    The wiring is shown below:

    • GPS TX > Digital 4 on Arduino (the Arduino’s receive pin in SoftwareSerial ss(4,3))
    • GPS RX > Digital 3 on Arduino (the Arduino’s transmit pin)
    • GPS VCC > Power 3.3V on Arduino
    • GPS GND > Power GND on Arduino

    Uploading

    Now, with the Arduino IDE open and the code ready, press Ctrl + U on your keyboard. The code will compile, upload, and start outputting to the serial monitor, which you can access by pressing Ctrl + Shift + M or by going to the top toolbar and clicking Tools > Serial Monitor. The GPS will take a couple of minutes to get its first fix. You may want to stick the antenna outside for this, as it can take a long time to get a location indoors.

    Soon, you will be able to view the data coming in.

  • How to Install MacOS on Windows using VMware

    If you have ever wanted to get the MacOS experience without owning a Mac, you may want to use a virtual machine, and this guide will teach you how to do just that by installing MacOS Ventura on one.

    What You Will Need

    MacOS Ventura ISO

    This is the ISO file for MacOS Ventura.

    NOTE This file is not from me; I simply found it on the internet. As of writing, VirusTotal does not show any positives on this file and it appears to work fine. This may change in the future, so beware.

    VMware Workstation Pro

    This will be our hypervisor.

    NOTE You may be able to find a cracked version or use a pirated license key, but that is illegal and not recommended.

    Step 1: Patching VMware Pro

    Download the unlocker script from GitHub and run it to patch VMware Workstation Pro. Make sure VMware is fully closed before you do this, otherwise the patch might not install correctly.

    After running the patch, you should see MacOS as an option for your VM.

    Step 2: Creating the VM

    Create a new VM with the guest operating system version set to MacOS 12, and supply your ISO. Give the VM at least 16 gigabytes of RAM and at least 80 gigabytes of hard drive storage. Give the VM a name, and click “Finish”.

    Step 3: First Boot and Setup

    Click on Power on this virtual machine. Once the operating system has loaded, select your language and double-click on Disk Utility

    Look for VMware Virtual SATA Hard Drive Media. Select it, then click Erase.

    A menu will pop up. Change the name to whatever you want, then pick a format and a scheme. I set the format to MacOS Extended (Journaled), and the most common scheme is GUID Partition Map. After you enter the name, format, and scheme, click Erase.

    The virtual disk will now begin erasing and formatting.

    Once the virtual disk is done formatting, close out of the window and click Install MacOS 13 Beta.

    The installer will then continue. The install should take around 15-20 minutes.

    Once the VM finishes installing, wait for the machine to fully boot. Once it is booted, shut down the VM and go to the VM’s settings.

    Click on CD/DVD (SATA) and make sure that Connect at power on is disabled.

    Power on the VM and continue with setup.

    NOTE There have been problems with people’s screen going black and then rebooting after setting up network options. Make sure you do not set up Wi-Fi during setup, and set it up when you are in the desktop environment.

    VMware Tools (Darwin)

    Click on VM > Install VMware Tools… and continue with the setup in MacOS.

    Limitations

    The following are limitations that you might want to consider before trying this project:

    • Hardware acceleration is basically nonexistent
    • The VM might crash randomly
    • The VM might be buggy and slow
    • You will not get support from Apple
  • How to disconnect WiFi devices on another network using the ESP8266

    There is a common WiFi attack that can disconnect any device on a WiFi network, including networks that you are not connected to.

    How it works

    The WiFi standard includes deauthentication frames, which let a device be disconnected cleanly from a network. In WPA2, however, these management frames are not encrypted or authenticated, so anyone with a WiFi-enabled device (like the ESP8266) can forge them.

    Installing the software

    First, go to the ESP8266 deauther page, find the latest release, and download the file esp8266_deauther_[VERSION]_NODEMCU.bin.

    Next, download the ESP8266 Flasher and run the utility. Flash the binary file (the .bin you downloaded) at 0x00000. Click Flash and wait till it completes.

    Running attacks

    On a device, connect to the WiFi network named pwned and enter the password deauther. Next, go to the IP address 192.168.4.1, click I have read and understood the risks, and choose a target network. Under the Attacks tab, start the Deauth attack.

    A note on WPA3

    WPA3, the current official WiFi security protocol, cryptographically protects these packets so hackers cannot forge them. However, many WiFi routers still use WPA2, so this attack will still work in many places.

  • How to Install the Play Store on Windows 11

    You can sideload apps on Windows 11, but you are missing out on some key features by sideloading, like Google Play Services. With this tutorial, you will be able to run the Play Store directly on Windows, and you can install apps without sideloading.

    Step 1: Uninstall any existing WSA instances

    Below are the following steps:
    • Press Windows Key + I to open the settings menu
    • Click Apps
    • Click Installed Apps
    • Find Windows Subsystem For Android™, and click the three dots
    • Click Uninstall, and confirm the prompt

    Step 2: Enable Developer Options

    To do this,

    • Open Settings with the keyboard shortcut Windows Key + I
    • Click on Privacy and Security
    • Click For Developers under the Security section
    • Enable Developer Mode, and click on Yes when asked to confirm

    Step 3: Allow the Windows Hypervisor Platform to Run

    If you have installed WSA previously, or have the Virtual Machine Platform and the Windows Hypervisor Platform enabled already, you can feel free to skip this step. However, it is always best to make sure these necessary features are enabled. If you have no idea what I am talking about or have not installed WSA before, these steps are necessary:

    • Press Windows Key + S to open Search
    • Type “Turn Windows Features on or off” and press Enter
    • Look for Virtual Machine Platform and Windows Hypervisor Platform and enable them
    • Click OK and restart the machine when asked to.

    Step 4: Download a modified WSA Installer

    • First, go to the MagiskOnWSA GitHub repository. Create an account if you don’t already have one or log in to GitHub.

    Note:

    The current GitHub repository (LPosed/MagiskOnWSA) has been disabled by GitHub due to violations of the terms of service. I will update this if the repository comes back online, but for now, it is offline. It is also highly unlikely that it will come back online, as it has been down for a couple of months now. However, there are still some mirrors and unmodified copies of the original that are still up. Some are listed below:

    • Then, on the GitHub repository, click Fork and wait till you see the Forked From menu. This repository is somewhat large, so give it some time to fork.
    • Once on your newly forked repository, click the Actions tab.
    • Now, if you receive the Workflows aren’t being run on this forked repository prompt, click I understand my workflows, go ahead and enable them.
    • Now, with workflows enabled, look for the workflow Build WSA in the left-hand pane and click it. After that, click Run Workflow.
    • In the flyout menu that shows up, keep all settings set to default but Variants of GApps. Under Variants of GApps, select (or type in) pico. If you are typing, make sure that the text you are typing into the text field is exact.

    Note:

    You can select other variants if you know what you are doing.

    • Now, click Run Workflow. After that, you should see a message at the top that says Workflow run was successfully requested. Be patient; the workflow usually takes a couple of minutes to complete.
    • Once the workflow has completed, click the workflow name and scroll down to the Artifacts section.
    • You will see two links to download files. Click the file name corresponding to your CPU architecture (arm64 or x64) and download the file. It is quite large, so store it accordingly. The file may take a while to download.
    • Right-click the ZIP file and click Extract, and choose the directory of your choice.

    Step 5: Finishing Up

    • In the newly created directory, look for the Install.ps1 file and right-click on it.
    • Click Open with PowerShell.
    • Click Open if asked to confirm the action.
    • The script will show the Operation Completed Successfully message on execution.
    • Be patient while the script installs the final bits of WSA (Windows Subsystem for Android). You will see a few installation notifications, and the script will exit after installing.
    • If prompted, click Allow Access so that Windows lets the Windows Subsystem for Android communicate on your network.
    • Click the start menu and type Windows Subsystem for Android. Open the application shown in search results.
    • Switch Developer Mode on if it is not already.
    • Click on Manage Developer Settings under the Developer Mode switch. This will restart WSA.
    • Now, when you open the Start Menu, you should see Play Store as one of the options. Open the app.
    • Allow the app through Windows Defender Firewall.
    • Now, you can open the Play Store just like any other app and sign in with your Google account. It will perform its usual synchronization, and then you can install Play Store apps on Windows 11.

    Note:

    If you can’t sign in, click Finish Setup in the system tray notification.

    Done!

  • Using the Raspberry Pi to Feed to FlightRadar24, FlightAware, and ADS-B Exchange


    In this post, I will show you how to start feeding flight data to three services: Flightradar24, ADS-B Exchange, and FlightAware. This post may be helpful if you want to run your own flight feeder and can’t decide which service to feed to.

    The main benefit of running your own Raspberry Pi flight tracker is that you will get paid accounts for free. For example, you will get a free FlightAware enterprise account, a subscription valued at around $89.95 a month. FlightRadar24 will also give you a free business plan, which costs $49.99 a month. If you also want to support non-proprietary websites, you can also give flight data to ADS-B Exchange.

    What you will need (hardware)

    Below are the parts you will need:

    Raspberry Pi 3B (x1, required)

    The Raspberry Pi will run the feeder software (this also works with other models, just not the Zero or the Pico; you can see FlightAware’s compatibility list here)

    ADS-B Dongle (x1, required)

    This will pick up ADS-B signals from nearby aircraft.

    MicroSD Card (at least 8GB, x1, required)

    This will store the PiAware operating system.

    Flashing our PiAware image

    To begin the installation, we will first start feeding to FlightAware. First, create an account at their website, or log in if you already have one.

    Now download the PiAware image (direct download for 7.2) and download Etcher. Then use Etcher to flash the PiAware image to your SD card.

    Note that you must select the correct drive, as this will erase your drive.

    Configuring the ISO

    If you want to enable SSH on your PiAware, or you have a wireless network configuration to set (which is typically everyone, unless you are using an ethernet cable on your Raspberry Pi), follow the steps at FlightAware’s website to configure the new operating system. Alternatively, you can skip the default configuration and, once the PiAware has booted up, configure it over Bluetooth.

    Booting it up

    Now, you can put it all together! Connect your ADS-B dongle to your Raspberry Pi via USB, and insert the SD card into the slot on the back. Then, plug in the HDMI cable (and ethernet if you are going to be using it), and power it on.

    Now, once the Raspberry Pi has booted up, you should see a screen showing the PiAware Status. If you did this correctly, it should be connected. You will also need to connect a keyboard if you do not know your PiAware’s IP address. If it asks you for credentials, the default is pi for username, and raspberry for password.

    Setting up the FlightAware feeder

    Now we get to the fun part: setting up the feeders. Let’s start with the FlightAware feeder. Since we flashed the custom image, the PiAware software is already installed, just not yet linked to an account. Create a basic (free) FlightAware account at their website if you don’t already have one, and claim your PiAware. Make sure the device you are using is connected to the same network as your PiAware; it will come in handy later. Then, from the status page on your PiAware, press Alt+F2 (or whatever key it says to press to open the terminal and run commands). If it asks you for credentials, the default is pi for the username and raspberry for the password (unless set otherwise, of course). Now run the following command:

    BASH
    
    hostname -I

    This should return your Pi’s IP address. Now, on another device, navigate to your IP address on the SkyAware page. For example, if my Pi’s IP address is 192.168.1.1, I will navigate to the following website:

    URL
    
    http://192.168.1.1/skyaware

    After that, you should see a map with all the aircraft you are tracking. You have successfully set up FlightAware! After some time, your basic account will be upgraded, and you can view your ADS-B statistics.

    Setting up FlightRadar24

    Now, open the terminal and run the following command:

    BASH
    
    sudo bash -c "$(wget -O - http://repo.feed.flightradar24.com/install_fr24_rpi.sh)"

    You will then be asked some questions about antenna position, fr24 sharing key, and other things.

    Now, we need to configure FlightRadar24. To begin, sign up for an account at their official website. You only need a free account; do not select any paid plan, because your account will automatically be upgraded at the end of this tutorial.

    Run the following command to enter configuration:

    BASH
    
    sudo fr24feed --signup

    You will be asked questions about you and the antenna that you are using. Answer the questions similar to the ones below:

    • Email Address: Enter the same email address you used to sign up for FlightRadar24. This is the address your sharing key will be sent to, and the address of the account that will be upgraded.
    • FR24 Sharing key: If you have never set up a feeder or have never got a sharing key from FlightRadar24, leave this blank. If not, enter your FlightRadar24 sharing key.
    • Participating in MLAT Calculations: Answer yes, unless you know you don’t want it or need it.
    • Autoconfiguration for dump1090 (if asked): Yes
    • Latitude & Longitude: Use a website like latlong.net to find your latitude and longitude. Be as accurate as possible, and answer in the form XX.XXXX (leave out any extra digits).
    • Altitude: This is your altitude from sea level. You can use whatismyelevation.com to find your altitude.
    • Receiver Selection: If you are using a DVB-T (the type I put in the parts list) stick then I strongly recommend option 1. If you encounter an error regarding dump1090 in this tutorial, restart the tutorial and click option 4. If you do not have a DVB-T stick, check out your other options.
    • Dump1090 Arguments (if asked): Leave this blank and hit enter.
    • Raw Data Feed: No, unless you know what you are doing.
    • Basestation Data feed: No unless you know what you are doing.
    • Logfile Mode: 48-hour, 24h rotation.
    • Logfile Path: This will be the path that the log file is saved to. If you want to use a custom path for logs, put it here. If not, stick with the default and hit enter.

    FlightRadar24’s configuration should report that everything is correctly set up. The program should also give you a sharing key. Save this key, as you may need it later.

    To begin feeding ADS-B data to FlightRadar24, enter the command below. Note that MLAT or general feeding might take some time to show up. For me, it took 30 minutes before the feeder was actively sending data to FlightRadar24:

    BASH
    
    sudo systemctl restart fr24feed

    You can go to the data page and view your feeder statistics. If you want to access the web UI for FlightRadar24, go to your Raspberry Pi’s IP address (remember, you can find it with hostname -I) in a web browser on port 8754, unless set otherwise. For example, my Raspberry Pi’s IP address is 192.168.1.252, so I access it at http://192.168.1.252:8754.

    Also, it is important to note that it may take some time for the receiver to start working and sending data. For me, it took 30 minutes before flight data was sent to the services I was feeding to.

    Setting up MLAT for FlightAware

    If you want to set up MLAT on FlightAware (I highly recommend doing so, as it can increase the number of positions seen), follow these steps.

    First, go to your FlightAware data sharing page and click the gear icon next to the nearest airport, labeled in orange.

    Then, enable MLAT (Mode S multilateration). Put in the same details as you did for FlightRadar24, or new details if you have to.

    Setting up ADS-B Exchange

    First, we need to download the ADS-B Exchange feeder software. You can do that with the following command:

    BASH
    
    sudo bash -c "$(wget -nv -O - https://raw.githubusercontent.com/adsbxchange/adsb-exchange/master/install.sh)"
    

    You will be asked a couple of questions. For the first one, type in a username of your choice, but note that it will be public. Next, enter the details it asks for, and it will begin configuring. Note that this may take a while.

    Next, run the following command:

    BASH
    
    sudo bash /usr/local/share/adsbexchange/git/install-or-update-interface.sh

    The script should output a sharing key. You can use this to view your feeder statistics at the official website of ADS-B Exchange. You should also be able to access your web interface on the adsbx page. This will be your Raspberry Pi’s IP address, with /adsbx at the end. For me, the URL was http://192.168.1.252/adsbx.

    Aftermath
    
    Screenshots of the finished setup: the PiAware status and SkyAware map (FlightAware), the data sharing pages (FlightAware and FlightRadar24), the status page (FlightRadar24), and the tar1090 flight map (ADS-B Exchange).
  • Tensor Dimensions and Basics in Python Artificial Intelligence and Machine Learning

    In PyTorch and TensorFlow, Tensors are a very popular way of storing large amounts of data in artificial intelligence projects. Here, I will show you what they are, and how they work.

    What makes a Tensor?

    Tensors are made up of scalars, vectors, and matrices. A scalar is a single number, a vector is a line of numbers, and a matrix is, as the name suggests, a table of numbers.
    
    Here is an example: if a matrix is an image, you can think of scalars as the pixels and vectors as the rows. You can then think of a tensor as a matrix that contains matrices.
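The hierarchy above can be seen directly in code. This sketch uses NumPy arrays, which report dimensions and shapes the same way tensors do in PyTorch and TensorFlow:

```python
import numpy as np

scalar = np.array(5)                  # a single number: 0 dimensions
vector = np.array([1, 2, 3])          # a line of numbers: 1 dimension
matrix = np.array([[1, 2],
                   [3, 4]])           # a table of numbers: 2 dimensions
tensor = np.array([[[1, 2], [3, 4]],  # a "matrix of matrices": 3 dimensions
                   [[5, 6], [7, 8]]])

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3
print(tensor.shape)  # (2, 2, 2): two 2x2 matrices
```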

    (Figure: a tensor diagram, with the main tensor in yellow, its two matrices in red and cyan, vectors in orange, and scalars in green.)

    Matrix dimension

    Matrices are tables of numbers, so the number of rows and columns in the matrix is the matrix dimension. Below is an example.

    1 2
    3 4

    There are two rows and two columns in this table of numbers or matrix, so the dimensions of this matrix are two by two. Below is another example.

    1 2 3 4
    1 2 3 4
    1 2 3 4
    1 2 3 4

    The matrix above has four rows and four columns, so it is a four-by-four matrix.

    Tensor Dimension

    Tensor dimensions are made up of three things. Earlier in this post, I mentioned that a tensor is a matrix containing matrices. The first dimension of a tensor is how many matrices it holds. The next two dimensions are the dimensions you want each matrix to have. For example,

    1 2 3 4
    5 6 7 8
    9 10 11 12
    13 14 15 16

    would be a 4×4 matrix. If you wanted four four-by-four matrices, you would make the first dimension (the number of matrices in the tensor) four. Each matrix should be 4×4, so you would input the next two dimensions as 4 and 4, giving a 4×4×4 tensor.
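As a sketch of that example, here is a 4×4×4 tensor built with NumPy (the same shape convention PyTorch and TensorFlow use):

```python
import numpy as np

# First dimension: how many matrices; next two: each matrix's rows and columns
t = np.zeros((4, 4, 4))

print(t.shape)     # (4, 4, 4)
print(t.ndim)      # 3
print(t[0].shape)  # (4, 4): the first of the four 4x4 matrices
```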

    Tips

    • If you do not input the first dimension (the number of matrices in the tensor), it defaults to 1.
    • Tensors are useful for storing massive amounts of data.
    • One of the easiest ways to make a tensor with custom values is to loop over every scalar in the tensor, setting each one to a value you choose.
    • In graph-based frameworks, tensors can be stored unevaluated. Instead of holding the raw numbers, the framework stores the computation that produces them, which is much easier on the machine’s memory and is part of what makes tensors so popular for storing mass data. If you want to see the actual numbers of such a tensor, you must evaluate it, which you can do with a simple function in both PyTorch and TensorFlow.
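The loop-based approach from the tips above can be sketched like this, using a small NumPy tensor (the fill rule here is made up purely for illustration):

```python
import numpy as np

t = np.zeros((2, 3, 3), dtype=int)

# Visit every scalar in the tensor: one loop per dimension
for m in range(t.shape[0]):          # which matrix
    for r in range(t.shape[1]):      # which row (vector)
        for c in range(t.shape[2]):  # which column (scalar)
            t[m, r, c] = m * 100 + r * 10 + c

print(t[1, 2, 2])  # 122
```

In practice, vectorized operations (like `np.arange` plus `reshape`) are far faster than Python loops, but explicit loops make the matrix/vector/scalar structure easy to see.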
  • Everything to know about “Follina” (CVE-2022-30190)

    “Follina”, or CVE-2022-30190, is a widely exploited vulnerability that allows an attacker to remotely execute PowerShell code on Windows machines from a Microsoft Word document or a URL.

    What it does

    Follina can do anything the attacker desires. It enables remote code execution, which means that a hacker can run any code they want on your machine without your knowledge. Some examples include lateral movement, privilege escalation, and stealing browser credentials.

    How it works

    Follina takes advantage of URL protocols, which are used to launch applications from a URL. These are generally allowed because URL protocols are not supposed to invoke arbitrary code. For example, if you are on a Windows machine and you type “ms-calculator://” into the address bar, Windows should launch the calculator. The specific URL protocol Follina abuses is “ms-msdt://”, which launches the Microsoft Support Diagnostics Tool, mainly used by support professionals to gather information about your system. If you put special parameters into the URL, though, you can trick the program into running PowerShell code and sending the results to the hacker.

    How it was discovered

    Follina was first discovered in early-to-mid April 2022 by a researcher who goes by the name of “crazyman”, as part of the Shadow Chaser Group. However, Microsoft dismissed the threat, stating that it was not a security issue.

    The support representative stated that the sample did not work in their lab. MSDT requires a password on startup, but the original script had enough junk and padding to make the file over 4096 bytes, and according to my tests and speculations, MSDT will only open if the exploiting file is over 4096 bytes. Also, this can be exploited through Rich Text documents and URLs or URL shortcuts, not just Word documents. Another reason this is a dangerous threat is that, when saved as a Rich Text file, simply navigating to it and viewing it in the preview pane in File Explorer can trigger the execution, meaning that you don’t even have to open the file for the code to be invoked.

    However, this file was only brought to the community’s attention when a Twitter user by the name of “nao_sec”, while searching VirusTotal for documents using an older exploit (CVE-2021-40444), found this document and alerted the community about it.

    How to protect yourself

    Because Follina is a zero-day exploit, there is no guaranteed patch. There is one workaround that Microsoft has acknowledged: disabling the “ms-msdt://” URL protocol.

  • How to make permanent bash aliases and bash functions

    In the last post, I covered how to make bash aliases. However, creating bash aliases with the alias command only lets you create bash aliases for the current session. For example, if I were to use alias hi="echo hi" to make a bash alias that connects hi with the command echo hi, and then exited the terminal session, all the bash aliases would reset. However, there is a way to make them permanent.

    Creating Permanent Bash aliases

    To make Bash aliases permanent, you will need to add lines to the ~/.bash_profile or ~/.bashrc file. Once you pick a file, open it with the text editor of your choice. I prefer nano.

    Make sure to source the file after you are done modifying it so the changes take effect, for example:
    
    BASH SHELL
    source ~/.bashrc

    The below commands are how you get into the file.

    BASH SHELL
    cd ~
    nano .bashrc

    Then add your aliases.

    BASH CONFIGURATION
    # Aliases
    # alias alias_name="command_to_run"
    
    
    # If you are new to the shell, note that the Linux system ignores lines that start with '#' so this line does not mean anything to Linux
    
    # Long format list
    alias ll="ls -la"
    
    # Print my public IP
    alias myip="curl ipinfo.io/ip"
    
    # That 'hi' example from earlier
    alias hi="echo hi"

    Now when you reboot your Linux system, the Bash aliases should still work.

    Bash aliases with command arguments (Bash functions)

    Sometimes you may need to create a Bash alias that will change depending on certain values or will accept arguments. The syntax for Bash functions is relatively easy. They can be declared in two separate formats. It is preferred to declare functions using the ~/.bashrc file.

    TIP: If you want your .bashrc file to be more modular, you can store your aliases in a separate file. Some distributions of Linux, like Ubuntu or Debian, include a separate .bash_aliases file.

    The syntax is below. Way 1 is the preferred way and the most used one.

    Way 1 (Multi-Line):

    BASH CONFIGURATION
    function_name () {
      commandsToRun
    }

    Way 1 (Single-Line):

    BASH CONFIGURATION
    function_name () { commandsToRun; }

    Way 2 uses the keyword of function, and then the function name, and after that the commands to run. The code below demonstrates way 2 (multi-line):

    BASH CONFIGURATION
    function function_name {
      commandsToRun
    }
    

    Way 2 (Single-Line):

    BASH CONFIGURATION
    function function_name { commandsToRun; }

    A few things that are nice to know:

    • The commands that the function will run go between the curly braces ({}). This is called the body of the function. The curly braces must be separated from the body by newlines or spaces. For example, function A {commandsToRun;} or A () {commandsToRun;} will not work, but function A { commandsToRun; } or A () { commandsToRun; }, which has spaces separating the body from the braces, will.
    • Simply defining a function will not execute it. If you want to invoke or execute a bash function simply use the name of the function. The body of the function is executed when you invoke it in the shell script.
    • You must define the function before calling or executing it.

    You can pass any number of arguments to a function. To pass them, list them right after the function’s name, separated by spaces. The passed parameters can be referenced with $1, $2, $3, $4, etc., corresponding to the position of the argument after the name of the function when it is executed. Below is a simple Bash function that will create a directory and then navigate into it:

    BASH/SH
    mkcd () {
      mkdir -p -- "$1" && cd -P -- "$1"
    }

    As with aliases, make sure to add the Bash functions to the ~/.bashrc file and run source ~/.bashrc to make the changes take effect.

    Now instead of having to make a directory using mkdir and then move into that directory using cd, you can simply use our mkcd function:

    BASH SHELL
    mkcd testDirectory

    A few things that you need to know:

    • The -- makes sure that the command does not parse the folder name as an option. If a folder name starts with -, it could otherwise be treated as an option.
    • The && ensures that the second command runs only if the first one is successful.

    Final Note

    By now you should have a good understanding of how Bash aliases and functions work. This should help you be more productive on the command line.

    If you have any questions or feedback, feel free to leave a comment on this post.