The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)
LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks
Adam Goodge (1,3), Bryan Hooi (1,2), See-Kiong Ng (1,2), Wee Siong Ng (3)

1 School of Computing, National University of Singapore
2 Institute of Data Science, National University of Singapore
3 Institute for Infocomm Research, A*STAR, Singapore

adam.goodge@u.nus.edu, {dcsbhk, seekiong}@nus.edu.sg, wsng@i2r.a-star.edu.sg
Abstract
Many well-established anomaly detection methods use the distance of a sample to those in its local neighbourhood: so-called ‘local outlier methods’, such as LOF and DBSCAN. They are popular for their simple principles and strong performance on unstructured, feature-based data that is commonplace in many practical applications. However, they cannot learn to adapt to a particular set of data due to their lack of trainable parameters. In this paper, we begin by unifying local outlier methods, showing that they are particular cases of the more general message passing framework used in graph neural networks. This allows us to introduce learnability into local outlier methods, in the form of a neural network, for greater flexibility and expressivity: specifically, we propose LUNAR, a novel, graph neural network-based anomaly detection method. LUNAR learns to use information from the nearest neighbours of each node in a trainable way to find anomalies. We show that our method performs significantly better than existing local outlier methods, as well as state-of-the-art deep baselines. We also show that the performance of our method is much more robust to different settings of the local neighbourhood size.
Introduction
Unsupervised anomaly detection is the task of detecting anomalies within a set of data without relying on ground truth labels of known anomalies. It is an extremely important task in a wide range of practical applications and has therefore received a great amount of research interest. As anomalies tend to be much rarer than normal data, labelled anomalies are difficult to obtain in the quantity needed to adequately train supervised techniques.
Many well-established unsupervised methods detect anomalies by measuring the distance of a point to its nearest neighbouring points: so-called local outlier methods, such as LOF and DBSCAN. These methods are very popular in practice due to their straightforward principles and assumptions, as well as their interpretable outputs. In our experiments, we find that their performance also holds up favourably against more recent, deep learning-based methods. The latter have to fully embed knowledge about normal and abnormal regions of the data space in their network parameters. They are mostly designed for highly structured, high-dimensional data such as images, and their performance often struggles on the less structured, feature-based data that is common in many applications. As such, local outlier methods remain the default choice in many areas.
A range of local outlier methods have been developed, each with its own unique formulation and properties. However, many of them also share common characteristics with each other. In this paper, our first contribution is to unify local outlier methods under a simple, general framework based on the message passing scheme used in graph neural networks (GNN). We demonstrate that many popular methods, such as KNN, LOF and DBSCAN, can be seen as particular cases of this more general message passing framework.
Despite their popularity, local outlier methods lack the capacity to learn to optimise for or adapt to a particular set of data, e.g. through trainable parameters. Furthermore, in an unsupervised setting, there is no straightforward way to find optimal hyper-parameter settings, such as the number of nearest neighbours, which is extremely important and greatly affects performance. In this paper, we also propose a novel method named LUNAR (Learnable Unified Neighbourhood-based Anomaly Ranking), which is based on the same message passing framework for local outlier methods but addresses their shortcomings by enabling learnability via graph neural networks.
In summary, we make the following contributions:
• We show that many popular local outlier methods, such as KNN, LOF and DBSCAN, can be unified under a single framework based on graph neural networks.
• We use this framework to develop a novel, GNN-based anomaly detection method (LUNAR) which is more flexible and adaptive to a given set of data than local outlier methods due to its trainable parameters.
• We show that our method gives better performance[1] than popular classical methods, including local outlier methods, as well as state-of-the-art deep learning-based methods in anomaly detection. We also show that its performance is much more robust to different settings of the local neighbourhood size than local outlier methods.

[1] Code available at https://github.com/agoodge/LUNAR
Related Work
Neighbourhood-based Anomaly Detection Besides notable examples like OC-SVM (Schölkopf et al. 2001) and IFOREST (Liu, Ting, and Zhou 2008), most classical anomaly detection methods directly measure the distance of a point to its nearest neighbours to detect anomalies, which we call ‘local outlier methods’. They rely on the assumption that anomalies are in sparse regions of the data space, far away from highly dense clusters of normal points. Points that are close to their neighbours are more likely to be normal themselves, whilst points far from their neighbours are more likely to be anomalies.
KNN (Angiulli and Pizzuti 2002) uses the distance to the kth nearest neighbour as the anomaly score. Alternatively, DBSCAN (Ester et al. 1996), which simultaneously learns to cluster normal data while also detecting outliers, uses the number of points within a pre-defined distance.

Local Outlier Factor (LOF) (Breunig et al. 2000) uses distances to define a density measure and compares this density to that of neighbouring points. Various extensions and variants of LOF have been developed, including but not limited to: Local Outlier Probabilities (LOOP) (Kriegel et al. 2009a), Connectivity-based (COF) (Tang et al. 2002), Local Correlation Integral (LOCI) (Papadimitriou et al. 2003), Influenced Outlierness (INFLO) (Jin et al. 2006), and Subspace Outlier Detection (SOD) (Kriegel et al. 2009b).
These methods suffer from a lack of learnability: they do not use the information in the training set to optimise model parameters for better anomaly scoring. Instead, they are based on pre-defined heuristics and hyper-parameters. These settings strongly influence performance, yet the optimal settings are very difficult to validate before deployment without access to labelled anomalies.
Deep Learning-based Anomaly Detection Deep models have improved state-of-the-art performance in anomaly detection, especially for highly structured, high-dimensional data. Autoencoders are particularly popular, with the reconstruction error acting as the anomaly score. Normal samples are assumed to be reconstructed with lower error than anomalies. They have been used with fully-connected (Sakurada and Yairi 2014), convolutional (Zhao et al. 2017) or recurrent (Malhotra et al. 2015) layers for different data applications. Variational (An 2015), denoising (Feng and Han 2015) and adversarial (Vu et al. 2019) autoencoders have also been used. The reconstruction errors from each encoder-decoder layer pair are fused together in (Kim et al. 2019). The autoencoder latent encodings are optimised directly in (Goodge et al. 2020) to improve robustness against adversarial perturbations.
Others use deep models as feature extractors for a secondary anomaly-detecting module, such as KNN (Bergman, Cohen, and Hoshen 2020), KDE (Nicolau, McDermott et al. 2016), DBSCAN (Amarbayasgalan, Jargalsaikhan, and Ryu 2018) or autoregressive models (Abati et al. 2019). Zong et al. (2018) simultaneously train an autoencoder for feature extraction with a Gaussian mixture model in the latent space for anomaly detection. Ruff et al. (2018) learn a normality-encoding hypersphere in the latent space and the
anomaly score is the distance from the centre. Generative adversarial networks use the generator's (in)ability to generate an unseen sample to indicate its anomalousness (Schlegl et al. 2017; Zenati et al. 2018).
There has been some interest in GNNs for anomaly detection in graph data, such as sensor networks (Deng and Hooi 2021; Cai et al. 2020; Zheng et al. 2019). Our method also uses GNNs, though it is distinct from these works as it is designed for unstructured, feature-based data rather than graphs.
Background
Local Outlier Methods
‘Local outlier methods’ refers to those methods which directly use the distance of a point to its k nearest neighbours to determine its anomalousness. We now detail KNN and LOF.
KNN The anomaly score of a point x_i is its distance to its kth nearest neighbour:

    KNN(x_i) = dist(x_i, x_i^{(k)})    (1)

where x_i^{(k)} is the kth nearest neighbour of x_i. Euclidean distance is most common, though any distance measure could be used depending on its suitability to the data type.
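As a concrete illustration, here is a minimal sketch of this scoring rule; the use of scikit-learn and the function name are our own assumptions, as the paper does not prescribe an implementation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_anomaly_scores(x_train, x_test, k):
    """Distance of each test point to its kth nearest training point (Eq. 1)."""
    nn = NearestNeighbors(n_neighbors=k).fit(x_train)
    dists, _ = nn.kneighbors(x_test)  # shape (n_test, k), sorted ascending
    return dists[:, -1]               # distance to the kth neighbour

rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 2))
x_test = np.array([[0.0, 0.0], [6.0, 6.0]])      # one normal point, one outlier
print(knn_anomaly_scores(x_train, x_test, k=5))  # the outlier gets the larger score
```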
LOF The Local Outlier Factor instead uses the ‘reachability distance’, which is defined for x_i from x_j as:

    reach_k(x_i, x_j) = max{k-dist(x_j), dist(x_i, x_j)}    (2)

where k-dist(x_j) is equal to dist(x_j, x_j^{(k)}). This is used to calculate the ‘local reachability density’ of a point:

    lrd_k(x_i) := \left( \frac{\sum_{j \in N_i} reach_k(x_i, x_j)}{|N_i|} \right)^{-1}    (3)

where N_i is the set of k nearest neighbours of x_i. Finally, this density measure is compared with that of neighbouring points to determine the local outlier factor:

    LOF(x_i) = \frac{\sum_{j \in N_i} lrd_k(x_j)}{|N_i| \cdot lrd_k(x_i)}    (4)
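A compact NumPy sketch may make the two rounds of computation in Eqs. (2)-(4) explicit; the helper names are ours, and this is an illustration rather than the paper's implementation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lof_scores(x, k):
    """Local Outlier Factor of each point in x, following Eqs. (2)-(4)."""
    # k+1 neighbours because each point is returned as its own nearest neighbour
    dists, idx = NearestNeighbors(n_neighbors=k + 1).fit(x).kneighbors(x)
    k_dist = dists[:, -1]  # k-dist(x_j): distance to the kth true neighbour
    neigh = idx[:, 1:]     # N_i: the k nearest neighbours, self excluded
    # Eq. (2): reach_k(x_i, x_j) = max{k-dist(x_j), dist(x_i, x_j)}
    reach = np.maximum(k_dist[neigh], dists[:, 1:])
    # Eq. (3): local reachability density
    lrd = 1.0 / reach.mean(axis=1)
    # Eq. (4): compare each point's density to that of its neighbours
    return lrd[neigh].mean(axis=1) / lrd
```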
Graph Neural Networks
Graph neural networks (GNN) operate on a graph G(V, E), in which a node, i ∈ V , is connected to an adjacent node, j ∈ V , via an edge (j, i) ∈ E. Edges can be undirected, in which case information flows in both directions between adjacent nodes. Alternatively, if the edges are directed, then information only flows from the source node to target node, i.e. from j to i along the edge (j, i). Nodes and edges can, but need not, have feature vectors, denoted by xi and ej,i for node i and edge (j, i) respectively.
GNNs have become increasingly popular in a range of graph-related applications, such as social networks (Fan
et al. 2019) and traffic networks (Cui et al. 2019). Of particular interest here is the node classification task, which involves learning successive latent representations of nodes through the network layers in order to predict the class label of each node. This relies on a message passing scheme, made up of message, aggregation and update steps. The message function (φ) determines the information to be sent to the node in question from each neighbour. The aggregation function (⊕) summarises these incoming messages into one message, for example by average or max-pooling. Finally, the update function (γ) uses this aggregated message and the current representation of the node to compute its subsequent representation. In summary, the kth layer of a GNN calculates the hidden representation of a node via the following (Gilmer et al. 2017):

    h_{N_i}^{(k)} = \bigoplus_{j \in N_i} \phi^{(k)}(h_i^{(k-1)}, h_j^{(k-1)}, e_{j,i}),
    h_i^{(k)} = \gamma^{(k)}(h_i^{(k-1)}, h_{N_i}^{(k)}),    (5)

where h_i^{(0)} = x_i and N_i is the set of adjacent nodes to i; h_{N_i}^{(k)} is the aggregation of the messages from its neighbours.
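The scheme in (5) can be written down directly. Below is a minimal, framework-agnostic sketch in which phi, aggregate and gamma are caller-supplied functions mirroring φ, ⊕ and γ; the names and structure are our own illustration:

```python
def gnn_layer(h, edges, edge_feats, phi, aggregate, gamma):
    """One round of message passing as in Eq. (5).

    h: list of node representations; edges: list of directed pairs (j, i);
    edge_feats: feature e_{j,i} per edge. Assumes every node has at least
    one incoming edge so the aggregation is well-defined.
    """
    inbox = [[] for _ in h]
    for (j, i), e in zip(edges, edge_feats):
        inbox[i].append(phi(h[i], h[j], e))           # message step
    agg = [aggregate(msgs) for msgs in inbox]         # aggregation step
    return [gamma(h_i, a) for h_i, a in zip(h, agg)]  # update step

# e.g. a KNN-style layer over a k-NN graph with distances as edge features:
# gnn_layer(h, edges, dists, lambda hi, hj, e: e, max, lambda hi, m: m)
```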
Problem Definition
We now define the unsupervised anomaly detection problem of interest in this paper. We assume to have m normal training samples x_1^{(train)}, ..., x_m^{(train)} \in \mathbb{R}^d and n testing samples x_1^{(test)}, ..., x_n^{(test)} \in \mathbb{R}^d, each of which may be normal or anomalous. For a test sample x_i^{(test)}, our algorithm should output an anomaly score s(x_i^{(test)}) that is low (or high) if x_i^{(test)} is normal (or anomalous).

In local outlier methods, the fundamental question is: How should the distances of a sample x_i^{(test)} to its nearest neighbours be used in computing its anomaly score?
In the following section, we show that many local outlier methods can be seen as particular cases of the message passing framework used by GNNs.
Unifying Framework
Local outlier methods collect information from the nearest neighbouring points to compute a statistic to indicate the anomalousness of a given point. This process fits within the GNN message passing framework outlined in (5). For ease of understanding, we show this using the example of KNN in particular.
Example: KNN
Recall that KNN computes the anomaly score based on the distance to the kth nearest neighbour of a point.
In the context of message passing, each data sample corresponds to one node in a graph and node i is connected to each of its k nearest neighbours, j ∈ Ni, via a directed edge (j, i), with edge feature ej,i equal to the distance between them (k-NN graph):
    e_{j,i} = \begin{cases} dist(x_i, x_j) & \text{if } j \in N_i \\ 0 & \text{otherwise} \end{cases}    (6)
Step     | KNN            | LOF                   | DBSCAN
---------+----------------+-----------------------+--------------------------
e_{j,i}  | dist(x_i, x_j) | reach(x_i, x_j)       | dist(x_i, x_j)
φ^{(1)}  | e_{j,i}        | e_{j,i}               | H(ε − e_{j,i})
⊕^{(1)}  | max            | sum                   | sum
γ^{(1)}  | h_{N_i}^{(1)}  | 1 / h_{N_i}^{(1)}     | H(h_{N_i}^{(1)} − minPts)
φ^{(2)}  | −              | h_j^{(1)} / h_i^{(1)} | h_j^{(1)}
⊕^{(2)}  | −              | mean                  | max
γ^{(2)}  | −              | h_{N_i}^{(2)}         | 1 − h_{N_i}^{(2)}

Table 1: Local outlier methods as they relate to the message passing framework defined in (5). H refers to the Heaviside function.
These edges are directed, as j ∈ N_i does not imply i ∈ N_j, so information flows along edge (j, i) only from the source node j to target node i. With this graph, we now show that KNN can be explained in terms of the message, aggregation and update functions in (5).
Message KNN collects the distances of a node to its nearest neighbours:

    \phi^{(1)} := e_{j,i}.    (7)

Aggregation It then outputs the maximum of these distances (i.e. max-pooling):

    h_{N_i}^{(1)} := \max_{j \in N_i} \phi^{(1)}    (8)

Update Finally, it outputs this aggregated message as the anomaly score:

    \gamma^{(1)} := h_{N_i}^{(1)}    (9)

Proposition 1. KNN is a special case of the message passing scheme formulated in (5).

Proof. The KNN anomaly score can be calculated using the message, aggregation and update functions formulated above. By substituting these functions into their appropriate counterparts in (5), we arrive at the following:

    KNN(i) = \max_{j \in N_i}(e_{j,i}),    (10)

which is a special, one-layer case of the message passing framework in (5).
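Proposition 1 can also be checked numerically: max-pooling the edge distances of the k-NN graph (Eq. 10) reproduces the direct KNN score of Eq. (1). A small sketch of our own:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 8))
k = 10

# Build the k-NN graph; column j of `e` holds the edge feature e_{j,i}
dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(x).kneighbors(x)
e = dists[:, 1:]  # drop the self-distance column

knn_direct = e[:, -1]   # Eq. (1): distance to the kth neighbour
knn_mp = e.max(axis=1)  # Eq. (10): max-aggregated messages
assert np.allclose(knn_direct, knn_mp)
```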
A similar analysis can be applied to LOF and DBSCAN, which are instead two-layer cases with two rounds of message passing. For example, in LOF, the first layer calculates the local reachability density as in (3) and the second layer calculates the local outlier score as in (4). Table 1 formalizes these connections, and an extended version with more local outlier methods can be found in the supplementary material available online[1].
Figure 1: Contours of the scores assigned by LOF versus LUNAR. Red indicates a high (anomalous) score and blue indicates a low (normal) score. Points are marked by black crosses and those with the top 15 highest assigned scores are marked by red squares.
Motivation: The Importance of Learnability
Local outlier methods lack trainable parameters which would enable them to optimise their performance for a given training set. In this section, we show that this hinders their overall accuracy. To do this, we compare the performance of LOF against our novel method LUNAR on a toy training dataset of 1000 points sampled from four Gaussian distributions. As perfectly pure training sets are rare in practice, we also generate 15 points from a uniform distribution within the data bounds. These points are much rarer and sparser than the others, so they should not significantly influence the predicted normal regions.
In Figure 1, low scores (blue) indicate a predicted normal region while high scores (red) indicate a predicted anomalous region. The points with one of the top 15 anomaly scores are indicated with red squares. We test the methods with a small and large value for the hyperparameter dictating the number of nearest neighbours (k).
With low k, the LOF score is low around the four clusters, but also low in regions far away from these clusters with few or no nearby training points. The central outlying region especially appears normal due to the strong influence of the relative sparsity of the very few points in the area. Conversely, with large k, the LOF score is erroneously high for the smaller cluster in the bottom-left corner. LOF fails to recognise the cluster's existence as it contains fewer points than k, instead predicting all nearby points to be anomalous. These issues are challenging as local outlier methods lack the capacity to learn a more optimal scoring mechanism from the data directly.
In comparison, the learnability of LUNAR enables it to perform better and more robustly across k: the regions assigned with normal or anomalous scores are a much closer fit to the training data, and the highest anomaly scores are
given to the sparse, central points more accurately. We now describe its methodology in full.
LUNAR: Methodology
Overview Our methodology involves a one-layer graph neural network as per the message passing framework described by (5). We represent a set of data as a graph, with a node corresponding to each data sample and directed edges connecting a target node to a set of source nodes, which are the nearest neighbours of the sample. For a given target node, the network utilises information from its nearest neighbouring nodes to learn its anomaly score. It differs from other GNN implementations in several ways:
• We construct the k-NN graph of any feature-based, tabular dataset, rather than being restricted to graph datasets.
• We use a node’s distances to its k nearest neighbours as input, which is more generalizable than using its feature vector.
• We use a learnable message aggregation function, whereas most GNNs use a fixed aggregation approach.
Model Design
We now describe the methodology used in LUNAR in more detail, starting with how the graph is formulated.
Nearest Neighbourhood Graph For a data sample xi, we define a target node i and edge (j, i) connecting it to a source node j for all j where xj is in the set of k nearest neighbours to xi. The edge feature vector is equal to the Euclidean distance between the two points:
    e_{j,i} = \begin{cases} dist(x_i, x_j) & \text{if } j \in N_i \\ 0 & \text{otherwise} \end{cases}    (11)
As training samples are all assumed to be normal, we only search for nearest neighbours among training samples, so that anomalies cannot influence the neighbourhood. With this, we define the message, aggregation and update functions in (5) as follows:
Message The message passed from source node j to target node i along edge (j, i) is equal to the edge feature ej,i (i.e. the distance between the points):
    \phi^{(1)} := e_{j,i}.    (12)
Aggregation Rather than a fixed average or max-pooling, we use a learnable aggregation, which is suitable for our setting as we are dealing with node neighbourhoods of a fixed size (k). Our message aggregation concatenates the incoming messages to give a k-dimensional vector, e^{(i)}, in which each entry represents the distance of x_i to its corresponding neighbour:
    e^{(i)} := [e_{1,i}, ..., e_{k,i}] \in \mathbb{R}^k.    (13)
This vector is mapped to a single, scalar value representing the anomalousness of node i, through a neural network:
    h_{N_i}^{(1)} := F(e^{(i)}, \Theta),    (14)
where Θ are the weights of the neural network F .
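As a sketch, F can be any network mapping R^k to a scalar. The architecture below follows the experimental section (four hidden layers of width 256, tanh activations, sigmoid output), though the exact module layout is our own assumption:

```python
import torch
from torch import nn

k = 100  # neighbourhood size: e(i) in Eq. (13) has k entries

# F in Eq. (14): k neighbour distances in, one anomaly score out
F = nn.Sequential(
    nn.Linear(k, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

e_i = torch.rand(32, k)      # a batch of concatenated neighbour distances
scores = F(e_i).squeeze(-1)  # one score per node, in [0, 1]
```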
Dataset    | #Size | #Dim | #Anomalies
-----------+-------+------+-----------
HRSS       | 90515 | 20   | 10187
MI-F       | 24955 | 58   | 2050
MI-V       | 22905 | 58   | 3942
OPTDIGITS  | 5216  | 64   | 150
PENDIGITS  | 6870  | 16   | 156
SATELLITE  | 6435  | 36   | 399
SHUTTLE    | 49097 | 9    | 3511
THYROID    | 7200  | 21   | 534

Table 2: Statistics of the datasets used in experiments.
Update Finally, the update function outputs this learned, aggregated message:
    \gamma^{(1)} := h_{N_i}^{(1)}.    (15)
We use a loss function which trains the GNN to output a score of 0 for normal nodes and 1 for anomalous nodes. As all training points are of the normal class, the network would attain perfect training accuracy by outputting zero scores regardless of the input. To avoid this trivial solution, we generate negative samples to act as artificial anomalies, training the model to output a score of 1 for the negative sample nodes. With this, we aim to learn a decision boundary between normal samples and negative samples which generalises to the true anomalies in the test set. In the next section, we detail how negative samples are generated.
Negative Sampling
Negative samples have been used to introduce supervision to unsupervised tasks, such as in contrastive learning (Chen et al. 2020), as well as anomaly detection (Sipple 2020). They need to be sufficiently distinguishable from normal samples for the model to learn the decision boundary, but not so dissimilar that the task is too easy and the learnt boundary fails to discriminate normal samples from real anomalies. With this in mind, we combine two methods of generating negative samples, which are as follows:
Uniform The first method involves generating negative samples from a uniform distribution:
    x^{(negative)} \sim U(-\varepsilon, 1 + \varepsilon) \in \mathbb{R}^d,    (16)
where ε is a small, positive constant. For simplicity, we use ε = 0.1 in all experiments. The training data is normalized to the range [0, 1], so these samples cover the data bounds. However, normal data occupies a much smaller subspace within these bounds, so many of these negative samples would be far from normal data and ineffective for learning the decision boundary. We complement this by generating an additional set of more ‘difficult’ negative samples.
Subspace Perturbation In the second method, we generate negative samples by adding Gaussian noise to normal samples in a subset of their feature dimensions:
    z \sim \mathcal{N}(0, I) \in \mathbb{R}^d,
    x_i^{(negative)} = x_i^{(train)} + M \circ \varepsilon z,    (17)
where ε is a small, positive constant and M ∈ {0, 1}^d is a vector of binary random variables. Each element in M has probability p of being one (and 1 − p of being zero), which determines the feature dimensions to be perturbed. We use p = 0.3 in all experiments.
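A sketch of both schemes together; the function name and batching are our own, and it assumes x_train is already normalised to [0, 1]:

```python
import numpy as np

def negative_samples(x_train, eps=0.1, p=0.3, seed=0):
    """Generate one uniform (Eq. 16) and one subspace-perturbation (Eq. 17)
    negative sample per training point."""
    rng = np.random.default_rng(seed)
    m, d = x_train.shape
    # Uniform: cover the (slightly inflated) data bounds
    uniform = rng.uniform(-eps, 1 + eps, size=(m, d))
    # Subspace perturbation: Gaussian noise on a random subset of dimensions
    z = rng.normal(size=(m, d))
    mask = (rng.random((m, d)) < p).astype(float)  # M: entries are 1 w.p. p
    perturbed = x_train + mask * eps * z
    return np.concatenate([uniform, perturbed])
```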
Computational Runtime In the supplementary material, we show the runtimes of LUNAR versus other methods in experiments. We see that LUNAR is faster than the other deep methods tested (e.g. 33.71 seconds for LUNAR versus 55.92 seconds for DAGMM on the HRSS dataset). LUNAR avoids directly training on high-dimensional feature data in its input, instead using distances between points, which explains the faster training time.
Limitations A limitation of LUNAR, as with all local outlier methods, is in finding the k nearest neighbours. This is mostly an issue in very high-dimensional spaces, such as with image data, where distance measures become less meaningful (Beyer et al. 1999). Adapting LUNAR for higher dimensionality is left for future work.
Theoretical Properties An additional benefit of our unified approach is that we can use it to characterize theoretical properties of almost all local outlier methods in our framework (including LUNAR, KNN, LOF, and DBSCAN) in a unified way. One simple but important property of algorithms is their symmetries under transformations, which are very relevant to understanding their inductive biases, or the assumptions they use to generalize to unseen data.
Let s(x; \{x_i^{(train)}\}_{i=1}^m) be the anomaly score of any local outlier method evaluated at x given training data \{x_i^{(train)}\}_{i=1}^m.
Proposition 2 (Transformation Equivariance). Given any distance-preserving transformation f, the score s is transformation equivariant; that is,

    s(x; \{x_i^{(train)}\}_{i=1}^m) = s(f(x); \{f(x_i^{(train)})\}_{i=1}^m)    (18)
For example, s is equivariant to rotations, translations and reflections.
Proof. As shown in Table 1, all these methods compute distances dist(xi, xj) or reach(xi, xj) as input, and do not use the input features xi in any other way. Applying f to the training and test data does not change the (reachability) distances between them, thus also preserving the score s.
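Proposition 2 is easy to verify empirically for the KNN score; in the sketch below (our own check), a random rotation plus translation leaves every score unchanged:

```python
import numpy as np
from scipy.stats import ortho_group
from sklearn.neighbors import NearestNeighbors

def knn_score(x_train, x_test, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(x_train)
    return nn.kneighbors(x_test)[0][:, -1]

rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 3))
x_test = rng.normal(size=(50, 3))
R = ortho_group.rvs(3, random_state=0)  # random rotation/reflection
t = rng.normal(size=3)
f = lambda x: x @ R.T + t               # distance-preserving transformation

assert np.allclose(knn_score(x_train, x_test),
                   knn_score(f(x_train), f(x_test)))
```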
Experiments
We now conduct experiments with real datasets to answer the following research questions:
RQ1 (Accuracy): Does LUNAR outperform existing baselines in detecting true anomalies?
RQ2 (Robustness): Is LUNAR more robust to changes in the neighbourhood size, k, than existing local outlier methods?
RQ3 (Ablation Study): How do variations in our methodology affect its performance?
Method   | HRSS    | MI-F  | MI-V    | OPTDIGITS | PENDIGITS | SATELLITE | SHUTTLE | THYROID
---------+---------+-------+---------+-----------+-----------+-----------+---------+--------
IFOREST  | 59.61   | 84.24 | 84.28   | 79.34     | 96.70     | 80.10     | 99.64   | 76.30
OC-SVM   | 61.03   | 78.65 | 74.56   | 59.84     | 94.08     | 64.64     | 98.29   | 52.81
LOF      | 60.13   | 63.07 | 79.14   | 99.53     | 98.18     | 84.25     | 99.80   | 68.67
KNN      | 62.09   | 78.08 | 82.71   | 96.57     | 98.42     | 86.07     | 99.56   | 63.01
AE       | 61.16   | 71.53 | 82.42   | 97.46     | 96.42     | 81.48     | 99.26   | 64.34
VAE      | 63.30   | 78.63 | 75.96   | 86.71     | 94.76     | 66.09     | 98.33   | 51.54
DAGMM    | 55.93   | 81.45 | 78.19   | 75.56     | 95.98     | 78.22     | 99.51   | 70.91
SO-GAAL  | 45.90   | 32.07 | 55.34   | 74.35     | 94.65     | 84.16     | 99.38   | 60.13
DN2      | 60.20   | 77.26 | 62.54   | 34.98     | 85.30     | 75.37     | 96.97   | 58.09
LUNAR    | 92.17** | 84.37 | 96.73** | 99.76     | 99.81**   | 85.35     | 99.97** | 85.44**

Table 3: AUC score for each method on each dataset. Best scores are highlighted in bold. Average scores marked by ** are greater than the next best performing method with significance level p < 0.01, according to the t-test. More significance test results are found in the supplementary material.
Datasets
Each dataset used in our experiments is publicly available and consists of a normal (0) class and anomaly (1) class. Table 2 summarises them and their key statistics.
We focus on the unsupervised case, in which the training set only consists of samples labelled as normal (all labelled anomalies are in the test set). We use Area-Under-Curve (AUC) to measure performance. The relative proportion of anomalies in the test set does not affect the scoring of any individual point, so we randomly subsample normal points to achieve a 50:50 normal:anomaly ratio in the test set. The remaining normal samples are split 85:15 into a training set and a validation set. We randomly generate both ‘Uniform’ and ‘Subspace Perturbation’ negative samples for the training and validation sets separately to avoid leaking information. We use a 1:1 ratio of negative:normal samples in both sets for all experiments.
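A sketch of this evaluation protocol; the names are ours, and x_normal and x_anomaly stand for the labelled subsets of a dataset:

```python
import numpy as np

def make_splits(x_normal, x_anomaly, seed=0):
    """50:50 normal:anomaly test set; remaining normals split 85:15 train/val."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x_normal))
    n_test = len(x_anomaly)  # subsample normals 1:1 with the anomalies
    x_test = np.concatenate([x_normal[idx[:n_test]], x_anomaly])
    y_test = np.concatenate([np.zeros(n_test), np.ones(n_test)])
    rest = x_normal[idx[n_test:]]
    n_train = int(0.85 * len(rest))
    return rest[:n_train], rest[n_train:], x_test, y_test  # train, val, test, labels
```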
Training Procedure
The neural network F in (14) consists of four fully connected hidden layers, all of size 256. All layers use tanh activations except for the sigmoid at the output layer. We use mean squared error as the loss function and Adam (Kingma and Ba 2014) for optimization, with a learning rate of 0.001 and weight decay of 0.1.
We trained the model for 200 epochs and used the model parameters with the best validation score as the final model. It was implemented using PyTorch Geometric on Windows OS with an Nvidia GeForce RTX 2080 Ti GPU.
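A toy end-to-end training sketch with the stated settings (MSE loss, Adam with learning rate 0.001 and weight decay 0.1, 200 epochs); the synthetic data and the smaller two-layer network are our own stand-ins, and checkpoint selection on the validation score is omitted for brevity:

```python
import torch
from torch import nn

torch.manual_seed(0)
k = 10
model = nn.Sequential(nn.Linear(k, 256), nn.Tanh(), nn.Linear(256, 1), nn.Sigmoid())

# Stand-in data: small neighbour distances -> label 0 (normal),
# larger ones -> label 1 (negative sample)
e = torch.cat([0.1 * torch.rand(256, k), 0.5 + 0.5 * torch.rand(256, k)])
y = torch.cat([torch.zeros(256), torch.ones(256)])

opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.1)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(e).squeeze(-1), y)
    loss.backward()
    opt.step()
```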
Baselines
We use the PyOD library (Zhao, Nasrullah, and Li 2019) implementations of IFOREST, OC-SVM, LOF, KNN, and the GAN-based SO-GAAL (Liu et al. 2019). We also implement a deep autoencoder (AE) and VAE in PyTorch, and DAGMM as in (Zong et al. 2018) with publicly available code. Finally, we include DN2 (Bergman, Cohen, and Hoshen 2020), which performs KNN on latent features learnt from
a deep, pre-trained feature extractor. As we are interested in tabular data rather than image data, unlike the original paper, we use an autoencoder (the same model as in AE) for feature extraction.
RQ1 (Accuracy):
Table 3 shows the AUC score (multiplied by 100) of each method for each dataset. We use AUC as it does not rely on a user-defined score threshold to predict normal or anomalous labels. The scores shown are the average over five repeated trials with different random seeds. For the methods that use it, all results are with k = 100 as the number of neighbours unless stated otherwise.
We see that LUNAR gives the best performance on all datasets except SATELLITE, for which KNN is slightly better. For the HRSS, MI-V and THYROID datasets in particular, our method performs substantially better than the baselines: between 10 and 30 percentage points better than the second best method. Our scores marked by ** are significantly better than the second best performing method for each dataset at significance level p < 0.01 according to the t-test.
RQ2 (Robustness to Neighbourhood Size):
LOF, KNN and DN2 also use the k nearest neighbours of a point to determine its anomalousness. In Table 4, we show the performance of these methods for various k. We see that these methods depend greatly on the value of k. For example, their score decreases by 26, 24 and 25 percentage points respectively for HRSS as k increases from 2 to 200. In stark contrast, LUNAR only drops in performance by 3 points in the same range. LUNAR gives the best performance in the vast majority of datasets and k settings. Our method not only performs better, but maintains stronger performance for different settings of k. This is because it is able to learn to use the information from all k neighbours effectively, whereas the other methods lose information from most neighbours, as decided by a pre-set aggregation rule.
RQ3 (Ablation Study):
Table 5 shows the performance with Subspace Perturbation (SP) and Uniform (U) negative samples individually. SP negative samples give better performance than U samples, except for the OPTDIGITS dataset, for which SP samples alone give poor performance for small values of k. Overall, mixing both types gives the best performance in most cases.
          HRSS                         MI-F                         MI-V
k    | LOF   KNN   DN2   LUNAR | LOF   KNN   DN2   LUNAR | LOF   KNN   DN2   LUNAR
2    | 82.08 86.25 85.28 93.88 | 90.43 77.84 91.13 81.50 | 94.31 94.58 86.76 96.06
10   | 67.98 65.53 62.40 92.67 | 86.41 73.46 85.58 82.39 | 92.60 88.53 77.92 96.09
50   | 61.66 62.71 60.46 92.21 | 67.17 71.69 78.66 83.58 | 78.61 83.29 64.96 96.38
100  | 60.13 62.09 60.20 92.17 | 63.07 78.08 77.26 84.37 | 79.14 82.71 62.54 96.73
150  | 57.22 61.81 60.14 91.61 | 60.60 80.79 76.33 82.82 | 80.73 82.86 61.77 96.53
200  | 55.59 61.86 60.22 90.09 | 70.89 82.85 75.93 84.47 | 81.75 82.65 61.67 96.30
Avg. | 64.11 67.10 64.79 92.11 | 73.10 77.45 80.82 83.19 | 84.52 85.77 69.27 96.35

          OPTDIGITS                    PENDIGITS                    SATELLITE
k    | LOF   KNN   DN2   LUNAR | LOF   KNN   DN2   LUNAR | LOF   KNN   DN2   LUNAR
2    | 99.58 99.91 50.90 99.91 | 99.37 99.84 81.08 99.84 | 85.05 87.72 80.16 87.80
10   | 99.92 99.63 45.84 99.79 | 99.67 99.77 80.74 99.82 | 85.38 86.77 79.43 87.83
50   | 99.72 98.41 39.23 99.81 | 98.79 98.79 81.83 99.80 | 83.44 86.07 76.52 87.58
100  | 99.53 96.57 34.98 99.76 | 98.18 98.42 85.30 99.81 | 84.25 86.07 75.37 85.35
150  | 99.11 94.85 33.10 99.73 | 97.58 98.07 86.39 99.76 | 84.86 85.85 74.48 83.95
200  | 98.63 93.13 32.14 99.78 | 97.19 97.52 85.49 99.71 | 85.21 85.46 73.39 84.70
Avg. | 99.41 97.09 39.37 99.79 | 98.46 98.74 83.47 99.79 | 84.69 86.32 76.55 86.08

          SHUTTLE                      THYROID
k    | LOF   KNN   DN2   LUNAR | LOF   KNN   DN2   LUNAR
2    | 99.64 99.98 98.94 99.98 | 83.70 80.28 64.09 83.38
10   | 99.91 99.93 98.22 99.95 | 83.69 73.87 62.71 84.24
50   | 99.74 99.68 97.19 99.97 | 74.41 66.49 59.88 86.01
100  | 99.80 99.56 96.97 99.97 | 68.67 63.01 58.09 85.44
150  | 99.80 99.43 96.68 99.95 | 67.20 62.26 56.86 86.08
200  | 99.69 99.32 96.45 99.97 | 66.58 61.24 56.26 86.67
Avg. | 99.76 99.65 97.41 99.96 | 74.04 67.86 59.65 85.31

Table 4: AUC score of LOF, KNN, DN2 and LUNAR for different values of k and the Avg. over all k. Best performance for each is highlighted in bold.
Dataset    | SP    | U     | Mixed
-----------+-------+-------+------
HRSS       | 93.32 | 66.34 | 92.17
MI-F       | 84.17 | 57.76 | 84.37
MI-V       | 96.64 | 67.99 | 96.73
OPTDIGITS  | 93.81 | 99.86 | 99.76
PENDIGITS  | 99.78 | 99.82 | 99.81
SATELLITE  | 85.37 | 85.12 | 85.35
SHUTTLE    | 99.96 | 99.54 | 99.97
THYROID    | 85.99 | 45.42 | 85.44

Table 5: AUC scores for different negative sample types (SP = Subspace Perturbation, U = Uniform).
Further ablation studies relating to the neural network size and depth can be found in the supplementary material. Overall, we find that deeper and wider networks for message aggregation give the best performance.
Conclusion
We have studied local outlier methods, some of the most well-established and popular anomaly detection methods in practice, which use the distance of data samples to their nearest neighbours to detect anomalies. We provided a unifying framework which shows that many local outlier methods can be seen as particular cases of the message passing scheme used in graph neural networks.
We then proposed LUNAR, which is based on this shared framework but is also able to learn and adapt to different sets of data by using a graph neural network. We show that our method significantly outperforms the baselines, including other deep learning-based methods, on a wide variety of datasets. Our method also maintains its strong performance for different neighbourhood sizes much better than other local outlier methods, as it is unique in its ability to learn from all incoming information from the neighbours.
Acknowledgements
This work was supported in part by NUS ODPRT Grant R252-000-A81-133.
References
Abati, D.; Porrello, A.; Calderara, S.; and Cucchiara, R. 2019. Latent space autoregression for novelty detection. In ICCV, 481–490.

Amarbayasgalan, T.; Jargalsaikhan, B.; and Ryu, K. H. 2018. Unsupervised novelty detection using deep autoencoders with density based clustering. Applied Sciences, 8(9): 1468.

An, J. 2015. Variational Autoencoder based Anomaly Detection using Reconstruction Probability. In SNU Data Mining Center 2015-2 Special Lecture on IE.

Angiulli, F.; and Pizzuti, C. 2002. Fast outlier detection in high dimensional spaces. In European Conference on Principles of Data Mining and Knowledge Discovery, 15–27. Springer.

Bergman, L.; Cohen, N.; and Hoshen, Y. 2020. Deep nearest neighbor anomaly detection. arXiv preprint arXiv:2002.10445.

Beyer, K.; Goldstein, J.; Ramakrishnan, R.; and Shaft, U. 1999. When is "nearest neighbor" meaningful? In International Conference on Database Theory, 217–235. Springer.

Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; and Sander, J. 2000. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, 93–104.

Cai, L.; Chen, Z.; Luo, C.; Gui, J.; Ni, J.; Li, D.; and Chen, H. 2020. Structural temporal graph neural networks for anomaly detection in dynamic graphs. arXiv preprint arXiv:2005.07427.

Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 1597–1607. PMLR.

Cui, Z.; Henrickson, K.; Ke, R.; and Wang, Y. 2019. Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting. IEEE Transactions on Intelligent Transportation Systems, 21(11): 4883–4894.

Deng, A.; and Hooi, B. 2021. Graph neural network-based anomaly detection in multivariate time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4027–4035.

Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X.; et al. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, 226–231.

Fan, W.; Ma, Y.; Li, Q.; He, Y.; Zhao, E.; Tang, J.; and Yin, D. 2019. Graph neural networks for social recommendation. In The World Wide Web Conference, 417–426.

Feng, W.; and Han, C. 2015. A Novel Approach for Trajectory Feature Representation and Anomalous Trajectory Detection. ISIF, 1093–1099.

Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural message passing for quantum chemistry. In International Conference on Machine Learning, 1263–1272. PMLR.

Goodge, A.; Hooi, B.; Ng, S. K.; and Ng, W. S. 2020. Robustness of Autoencoders for Anomaly Detection Under Adversarial Impact. In IJCAI.

Jin, W.; Tung, A. K. H.; Han, J.; and Wang, W. 2006. Ranking outliers using symmetric neighborhood relationship. In PAKDD, 577–593. Springer.

Kim, K. H.; Shim, S.; Lim, Y.; Jeon, J.; Choi, J.; Kim, B.; and Yoon, A. S. 2019. RaPP: Novelty Detection with Reconstruction along Projection Pathway. In ICLR.

Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kriegel, H.-P.; Kröger, P.; Schubert, E.; and Zimek, A. 2009a. LoOP: local outlier probabilities. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, 1649–1652.

Kriegel, H.-P.; Kröger, P.; Schubert, E.; and Zimek, A. 2009b. Outlier detection in axis-parallel subspaces of high dimensional data. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 831–838. Springer.

Liu, F. T.; Ting, K. M.; and Zhou, Z.-H. 2008. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, 413–422. IEEE.

Liu, Y.; Li, Z.; Zhou, C.; Jiang, Y.; Sun, J.; Wang, M.; and He, X. 2019. Generative adversarial active learning for unsupervised outlier detection. IEEE Transactions on Knowledge and Data Engineering, 32(8): 1517–1528.

Malhotra, P.; Vig, L.; Shroff, G.; and Agarwal, P. 2015. Long short term memory networks for anomaly detection in time series. In ESANN, 89–94. Presses universitaires de Louvain.

Nicolau, M.; McDermott, J.; et al. 2016. A hybrid autoencoder and density estimation model for anomaly detection. In International Conference on Parallel Problem Solving from Nature, 717–726. Springer.

Papadimitriou, S.; Kitagawa, H.; Gibbons, P. B.; and Faloutsos, C. 2003. LOCI: Fast outlier detection using the local correlation integral. In Proceedings of the 19th International Conference on Data Engineering, 315–326. IEEE.

Ruff, L.; Vandermeulen, R.; Goernitz, N.; Deecke, L.; Siddiqui, S. A.; Binder, A.; Müller, E.; and Kloft, M. 2018. Deep one-class classification. In ICML, 4393–4402.

Sakurada, M.; and Yairi, T. 2014. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In MLSDA, 4. ACM.

Schlegl, T.; Seeböck, P.; Waldstein, S. M.; Schmidt-Erfurth, U.; and Langs, G. 2017. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In IPMI, 146–157. Springer.

Schölkopf, B.; Platt, J. C.; Shawe-Taylor, J.; Smola, A. J.; and Williamson, R. C. 2001. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7): 1443–1471.

Sipple, J. 2020. Interpretable, multidimensional, multimodal anomaly detection with negative sampling for detection of device failure. In International Conference on Machine Learning, 9016–9025. PMLR.

Tang, J.; Chen, Z.; Fu, A. W.-C.; and Cheung, D. W. 2002. Enhancing effectiveness of outlier detections for low density patterns. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 535–548. Springer.

Vu, H. S.; Ueta, D.; Hashimoto, K.; Maeno, K.; Pranata, S.; and Shen, S. M. 2019. Anomaly Detection with Adversarial Dual Autoencoders. arXiv preprint arXiv:1902.06924.

Zenati, H.; Foo, C. S.; Lecouat, B.; Manek, G.; and Chandrasekhar, V. R. 2018. Efficient GAN-based anomaly detection. arXiv preprint arXiv:1802.06222.

Zhao, Y.; Deng, B.; Shen, C.; Liu, Y.; Lu, H.; and Hua, X.-S. 2017. Spatio-temporal autoencoder for video anomaly detection. In ACM Multimedia, 1933–1941.

Zhao, Y.; Nasrullah, Z.; and Li, Z. 2019. PyOD: A Python Toolbox for Scalable Outlier Detection. Journal of Machine Learning Research, 20(96): 1–7.

Zheng, L.; Li, Z.; Li, J.; Li, Z.; and Gao, J. 2019. AddGraph: Anomaly Detection in Dynamic Graph Using Attention-based Temporal GCN. In IJCAI, 4419–4425.

Zong, B.; Song, Q.; Min, M. R.; Cheng, W.; Lumezanu, C.; Cho, D.; and Chen, H. 2018. Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In International Conference on Learning Representations.