IT Threat Detection with Similarity Search

This notebook shows how to use Pinecone similarity search to build an application for detecting rare events. Such applications are common in the cybersecurity and fraud detection domains, where only a tiny fraction of events is malicious.


Here we will build a network intrusion detector. Network intrusion detection systems monitor incoming and outgoing network traffic, raising alarms whenever a threat is detected. Here we use a deep learning model together with similarity search to detect and classify network intrusion traffic.

We will start by indexing a set of labeled traffic events in the form of vector embeddings. Each event is either benign or malicious. The vector embeddings are rich mathematical representations of the network traffic events, which make it possible to determine how similar the events are to one another using the similarity-search algorithms built into Pinecone. Here we will transform network traffic events into vectors using a deep learning model from recent academic work.
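For intuition, the index we create below uses the Euclidean metric, so two events are "similar" when the distance between their embedding vectors is small. A minimal sketch with made-up 3-dimensional vectors (the real embeddings below have 128 dimensions):

```python
import math

# Hypothetical 3-d embeddings; the real embeddings below are 128-dimensional.
benign_event  = [0.1, 0.9, 0.2]
similar_event = [0.1, 0.8, 0.3]
attack_event  = [2.5, -1.0, 4.0]

# Smaller Euclidean distance means more similar events.
d_similar = math.dist(benign_event, similar_event)
d_attack = math.dist(benign_event, attack_event)
assert d_similar < d_attack
```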

We will then take some new (unseen) network events and search through the index to find the most similar matches, along with their labels. In this way, we propagate the matched labels to classify the unseen events as benign or malicious.

Note that intrusion detection is a challenging classification task because malicious events are sporadic. The similarity search service helps us sift through historical labeled events to find the most relevant matches. That way, we can identify these rare events while keeping the rate of false alarms low.
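As a sketch of the label-propagation idea (anticipating the id scheme used later, where each id starts with the first three letters of its label): an event is flagged as an attack if any of its nearest matches carries an attack label prefix. This mirrors the policy in the query loop at the end of the notebook.

```python
from collections import Counter

def classify_event(neighbor_ids):
    """Flag an event as an attack if any nearest-neighbour id carries an
    attack label prefix ('Bru' or 'SQL' under the id scheme used below)."""
    counts = Counter(nid.split('_')[0] for nid in neighbor_ids)
    return 'Attack' if (counts['Bru'] or counts['SQL']) else 'Benign'

classify_event(['Ben_12', 'Ben_47', 'SQL_3'])   # 'Attack'
classify_event(['Ben_12', 'Ben_47', 'Ben_90'])  # 'Benign'
```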

Set Up Pinecone

We will first install and initialize Pinecone. You can get your API Key here.

!pip install -qU pinecone-client
import pinecone
import os
# Load Pinecone API key
api_key = os.getenv('PINECONE_API_KEY') or 'YOUR_API_KEY'
pinecone.init(api_key=api_key)
# List all indexes associated with your key; this should be empty on the first run
pinecone.list_indexes()

Install Other Dependencies

!pip install -qU pip python-dateutil tensorflow scikit-learn matplotlib seaborn
from collections import Counter
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from tensorflow import keras
from keras.models import Model
import tensorflow.keras.backend as K
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.metrics import confusion_matrix

We will use some of the code from a recent academic work. Let’s clone the repository that we will use to prepare data.

!git clone -q https://github.com/rambasnet/DeepLearning-IDS.git 

Define a New Pinecone Index

# Pick a name for the new service
index_name = 'it-threats'
# Make sure an index with the same name does not exist
if index_name in pinecone.list_indexes():
    pinecone.delete_index(index_name)

Create an index

pinecone.create_index(name=index_name, metric='euclidean', shards=2)
{'msg': '', 'success': True}

Connect to the index

We create an index object, a class instance of pinecone.Index, which will be used to interact with the created index.

index = pinecone.Index(name=index_name, timeout=600)

Upload

Here we transform network events into vector embeddings, then upload them into Pinecone’s vector index.

Prepare Data

The datasets we use consist of benign (normal) network traffic and malicious traffic generated from several different network attacks. We will focus on web attacks only.

The web attack category consists of three common attacks:

  • Cross-site scripting (BruteForce-XSS)
  • SQL injection (SQL-Injection)
  • Brute-force guessing of administrative and user passwords (BruteForce-Web)

The original data was recorded over two days.

Download data for 22-02-2018 and 23-02-2018

Files should be downloaded to the current directory. We will be using one date for training and generating vectors, and another one for testing.

!wget "https://cse-cic-ids2018.s3.ca-central-1.amazonaws.com/Processed%20Traffic%20Data%20for%20ML%20Algorithms/Thursday-22-02-2018_TrafficForML_CICFlowMeter.csv" -q --show-progress
!wget "https://cse-cic-ids2018.s3.ca-central-1.amazonaws.com/Processed%20Traffic%20Data%20for%20ML%20Algorithms/Friday-23-02-2018_TrafficForML_CICFlowMeter.csv" -q --show-progress

Let’s look at the data events first.

data = pd.read_csv('Friday-23-02-2018_TrafficForML_CICFlowMeter.csv')
data.Label.value_counts()
Benign              1048009
Brute Force -Web        362
Brute Force -XSS        151
SQL Injection            53
Name: Label, dtype: int64

Clean the data using a Python script from the cloned repository.

!python DeepLearning-IDS/data_cleanup.py "Friday-23-02-2018_TrafficForML_CICFlowMeter.csv" "result23022018"
cleaning Friday-23-02-2018_TrafficForML_CICFlowMeter.csv
total rows read = 1048576
all done writing 1042868 rows; dropped 5708 rows

Load the file that you got from the previous step.

data_23_cleaned = pd.read_csv('result23022018.csv')
data_23_cleaned.head()
(Output: the first five rows of the cleaned data — 79 numeric flow features such as Dst Port, Protocol, Timestamp, Flow Duration, and packet-length statistics, plus a Label column; all five rows shown are Benign.)
data_23_cleaned.Label.value_counts()
Benign              1042301
Brute Force -Web        362
Brute Force -XSS        151
SQL Injection            53
Name: Label, dtype: int64

Load the Model

Here we load the pretrained model. The model was trained on the data from 23-02-2018, the same date we are indexing.

We have modified the original model slightly and changed the number of classes from four (Benign, BruteForce-Web, BruteForce-XSS, SQL-Injection) to two (Benign and Attack). In the step below we will download and unzip our modified model.

!wget -q -O it_threat_model.model.zip "https://drive.google.com/uc?export=download&id=1VYMHOk_XMAc-QFJ_8CAPvWFfHnLpS2J_" 
!unzip -q it_threat_model.model.zip

model = keras.models.load_model('it_threat_model.model')
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 128)               10240     
_________________________________________________________________
dense_1 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65        
=================================================================
Total params: 18,561
Trainable params: 18,561
Non-trainable params: 0
_________________________________________________________________
# Select the first layer
layer_name = 'dense' 
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
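As a sanity check, the input dimensionality can be recovered from the parameter counts in the summary above: a dense layer has inputs × units + units parameters, so the first layer's 10,240 parameters imply 79 input features — matching the 79 flow-feature columns (everything except Label) in the cleaned data.

```python
# Dense layer parameters = inputs * units + units (weights plus biases).
units = 128
params = 10240
inputs = (params - units) // units
assert inputs == 79

# The remaining layers' counts also check out:
assert 128 * 64 + 64 == 8256       # dense_1
assert 64 * 1 + 1 == 65            # dense_2
assert 10240 + 8256 + 65 == 18561  # total params
```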

Upload Data

Let’s define the items’ ids so that they reflect each event’s label. Then, we index the events in Pinecone’s vector index.

from tqdm import tqdm
items_to_upload = []

model_res = intermediate_layer_model.predict(K.constant(data_23_cleaned.iloc[:,:-1]))

for i, res in tqdm(zip(data_23_cleaned.iterrows(), model_res), total=len(model_res)):
    benign_or_attack = i[1]['Label'][:3]
    items_to_upload.append((benign_or_attack + '_' + str(i[0]), res))
100%|██████████| 1042867/1042867 [01:22<00:00, 12599.71it/s]

You can lower NUMBER_OF_ITEMS to limit the number of uploaded items.

NUMBER_OF_ITEMS = len(items_to_upload)

upsert_acks = index.upsert(items=items_to_upload[:NUMBER_OF_ITEMS])
0it [00:00, ?it/s]

Let’s verify all items were inserted.

index.info()
InfoResult(index_size=1042867)

Query

First, we will randomly select a Benign/Attack event and query the vector index using the event embedding. Then, we will use data from a different day, which contains the same set of attacks, to query on a bigger sample.

Evaluate the Rare Event Classification Model

We will use the network intrusion dataset for 22-02-2018 to query and test the Pinecone index.

First, let’s clean the data.

!python DeepLearning-IDS/data_cleanup.py "Thursday-22-02-2018_TrafficForML_CICFlowMeter.csv" "result22022018"
cleaning Thursday-22-02-2018_TrafficForML_CICFlowMeter.csv
total rows read = 1048576
all done writing 1042966 rows; dropped 5610 rows
data_22_cleaned = pd.read_csv('result22022018.csv')
data_22_cleaned.head()
(Output: the first five rows of the cleaned data for 22-02-2018 — the same 79 numeric flow features plus a Label column; all five rows shown are Benign.)
data_22_cleaned.Label.value_counts()
Benign              1042603
Brute Force -Web        249
Brute Force -XSS         79
SQL Injection            34
Name: Label, dtype: int64

Let’s define a sample that includes all the different types of web attacks for this date.

data_sample = data_22_cleaned[-2000:]
data_sample.Label.value_counts()
Benign              1638
Brute Force -Web     249
Brute Force -XSS      79
SQL Injection         34
Name: Label, dtype: int64

Now, we will query the test dataset and save predicted and expected results to create a confusion matrix.

y_true = []
y_pred = []

BATCH_SIZE = 1000

for i in tqdm(range(0, len(data_sample), BATCH_SIZE)):
    test_data = data_sample.iloc[i:i+BATCH_SIZE, :]
    
    # Create vector embedding using the model
    test_vector = intermediate_layer_model.predict(K.constant(test_data.iloc[:, :-1]))
    
    # Query using the vector embedding
    query_results = index.query(queries=test_vector, top_k=1000)
    
    for label, res in zip(test_data.Label.values, query_results):
        # Add to the true list
        if label == 'Benign':
            y_true.append(0)
        else:
            y_true.append(1)

        counter = Counter(_id.split('_')[0] for _id in res.ids)

        # Add to the predicted list
        if counter['Bru'] or counter['SQL']:
            y_pred.append(1)
        else:
            y_pred.append(0)
# Create confusion matrix
conf_matrix = confusion_matrix(y_true, y_pred)

# Show confusion matrix
ax = plt.subplot()
sns.heatmap(conf_matrix, annot=True, ax = ax, cmap='Blues', fmt='g', cbar=False)

# Add labels, title and ticks
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Benign', 'Attack'])
ax.yaxis.set_ticklabels(['Benign', 'Attack'])
[Text(0, 0.5, 'Benign'), Text(0, 1.5, 'Attack')]

Now we can calculate overall accuracy and per class accuracy.

# Calculate accuracy
acc = accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)

print(f"Accuracy: {acc:.3f}")
print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
Accuracy: 0.958
Precision: 0.904
Recall: 0.859
# Calculate per class accuracy
cmd = confusion_matrix(y_true, y_pred, normalize="true").diagonal()
# Use `cls`, not `index`, as the loop variable to avoid shadowing the Pinecone index object
per_class_accuracy_df = pd.DataFrame(
    [(cls, round(value, 4)) for cls, value in zip(['Benign', 'Attack'], cmd)],
    columns=['type', 'accuracy'])
per_class_accuracy_df = per_class_accuracy_df.round(2)
display(per_class_accuracy_df)
     type  accuracy
0  Benign      0.98
1  Attack      0.86

We got great results using Pinecone! Let’s see what happens if we skip the similarity search step and predict labels directly with the model. In other words, let’s use the model that created the embeddings as a classifier, and compare its accuracy with that of the similarity search approach.

from keras.utils.np_utils import normalize
import numpy as np

data_sample = normalize(data_22_cleaned.iloc[:, :-1])[-2000:]
# The rows are already L2-normalized above, so there is no need to normalize again
y_pred_model = model.predict(data_sample).flatten()
y_pred_model = np.round(y_pred_model)
# Create confusion matrix
conf_matrix = confusion_matrix(y_true, y_pred_model)

# Show confusion matrix
ax = plt.subplot()
sns.heatmap(conf_matrix, annot=True, ax = ax, cmap='Blues', fmt='g', cbar=False)

# Add labels, title and ticks
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Benign', 'Attack'])
ax.yaxis.set_ticklabels(['Benign', 'Attack'])
[Text(0, 0.5, 'Benign'), Text(0, 1.5, 'Attack')]
# Calculate accuracy
acc = accuracy_score(y_true, y_pred_model, normalize=True, sample_weight=None)
precision = precision_score(y_true, y_pred_model)
recall = recall_score(y_true, y_pred_model)

print(f"Accuracy: {acc:.3f}")
print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
Accuracy: 0.871
Precision: 1.000
Recall: 0.287
# Calculate per class accuracy
cmd = confusion_matrix(y_true, y_pred_model, normalize="true").diagonal()
# Use `cls`, not `index`, as the loop variable to avoid shadowing the Pinecone index object
per_class_accuracy_df = pd.DataFrame(
    [(cls, round(value, 4)) for cls, value in zip(['Benign', 'Attack'], cmd)],
    columns=['type', 'accuracy'])
per_class_accuracy_df = per_class_accuracy_df.round(2)
display(per_class_accuracy_df)
     type  accuracy
0  Benign      1.00
1  Attack      0.29

As we can see, the direct application of our model produced much worse results. Pinecone’s similarity search over the same model’s embeddings improved our threat detection (i.e., “Attack”) accuracy by more than 50 percentage points!

Result summary

Using standard vector embeddings with Pinecone’s similarity search service, we detected 85% of the attacks while keeping a low 3% false-positive rate. We also showed that our similarity search approach outperforms direct classification with the same model that produced the embeddings: similarity-search-based detection achieved attack-class accuracy more than 50 percentage points higher than the direct detector (0.86 vs. 0.29).
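These headline numbers follow directly from the confusion matrix: detection rate is TP / (TP + FN) and false-positive rate is FP / (FP + TN). A small helper (the counts below are illustrative, not the exact counts from this run):

```python
def rates(tn, fp, fn, tp):
    """Detection rate (recall on attacks) and false-positive rate."""
    detection = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return detection, fpr

# Illustrative counts for a 2000-event sample with 362 attacks.
detection, fpr = rates(tn=1595, fp=43, fn=51, tp=311)
print(f"detection={detection:.2f}, fpr={fpr:.3f}")  # roughly 0.86 and 0.026
```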

Original published results for 02-22-2018 show that the model correctly detected all 208,520 benign cases, but only 24 (18+1+5) of the 70 attacks in the test set, making the model 34.3% accurate in predicting attacks. For testing purposes, 20% of the data for 02-22-2018 was used.
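The published attack-accuracy figure is just the ratio of detected attacks to total attacks:

```python
detected_attacks = 18 + 1 + 5  # per-class detections reported in the published results
total_attacks = 70
print(round(100 * detected_attacks / total_attacks, 1))  # 34.3
```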


As you can see, similarity search over the model’s embeddings performed much better than the published direct-classification results.

The model we created follows the academic paper’s model for the same date (02-23-2018) with slight modifications, but it is still a straightforward, shallow, sequential model. We changed the number of classes from four (Benign, BruteForce-Web, BruteForce-XSS, SQL-Injection) to two (Benign and Attack), since we are only interested in whether an event is an attack or not.

We have also changed validation metrics to precision and recall. These changes improved our results. Yet, there is still room for further improvements, for example, by adding more data covering multiple days and different types of attacks.

Delete the Index

Delete the index once you are sure that you do not want to use it anymore. Once it is deleted, you cannot reuse it.

pinecone.delete_index(index_name)
{'success': True}