Introduction to Dynamic Time Warping
Dynamic Time Warping (DTW) is a well-known algorithm for comparing and aligning two sequences of data points (a.k.a. time series). Although it was originally developed for speech recognition (see Sakoe & Chiba, 1978), it has also been applied to many other fields such as bioinformatics, econometrics and, of course, handwriting recognition.
Consider two sequences A and B, composed respectively of n and m feature vectors.
Each feature vector is d-dimensional and can thus be represented as a point in a d-dimensional space. For example, in handwriting recognition, we could directly use the raw (x,y) coordinates of the pen movement, which would give us sequences of 2-dimensional vectors. In practice, however, one would extract more useful features from (x,y) and create vectors of dimension possibly greater than 2. It's also worth noting that the sequences A and B can have different lengths.
DTW works by warping (hence the name) the time axis iteratively until an optimal match between the two sequences is found.
In the figure above, which is an example of two sequences of data points with only 1 dimension, the time axis is warped so that each data point in the green sequence is optimally aligned to a point in the blue sequence.
We can construct an n x m distance matrix. In this matrix, each cell (i,j) represents the distance between the i-th element of sequence A and the j-th element of sequence B. The distance metric used depends on the application, but a common choice is the Euclidean distance.
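As a minimal sketch of this step (pure Python, assuming the sequences are lists of d-dimensional points; the function name is my own, not from any particular library):

```python
import math

def distance_matrix(A, B):
    """Pairwise Euclidean distances: dist[i][j] is the distance
    between the i-th point of A and the j-th point of B."""
    return [[math.dist(a, b) for b in B] for a in A]

# Two short 2-dimensional sequences (e.g. raw pen coordinates),
# of different lengths.
A = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
B = [(0.0, 0.0), (2.0, 2.0)]
dist = distance_matrix(A, B)
# dist[0][0] == 0.0 since the first points of A and B coincide.
```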
Finding the best alignment between two sequences can be seen as finding the shortest path to go from the bottom-left cell to the top-right cell of that matrix. The length of a path is simply the sum of all the cells that were visited along that path. The further away the optimal path wanders from the diagonal, the more the two sequences need to be warped to match together.
The brute force approach to finding the shortest path would be to try each path one by one and finally select the shortest one. However it’s apparent that it would result in an explosion of paths to explore, especially if the two sequences are long. To solve this problem, DTW uses two things: constraints and dynamic programming.
DTW can impose several kinds of reasonable constraints, to limit the number of paths to explore.
- Monotonicity: The alignment path doesn’t go back in time index. This guarantees that features are not repeated in the alignment.
- Continuity: The alignment doesn’t jump in time index. This guarantees that important features are not omitted.
- Boundary: The alignment starts at the bottom-left and ends at the top-right. This guarantees that the sequences are not considered only partially.
- Warping window: A good alignment path is unlikely to wander too far from the diagonal. This guarantees that the alignment doesn’t try to skip different features or get stuck at similar features.
- Shape: Alignment paths shouldn’t be too steep or too shallow. This prevents short sequences from being aligned with long ones.
These constraints are best visualized graphically.
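Of these constraints, the warping window is the one most commonly enforced explicitly in code. A minimal sketch of such a band check, in the style of the Sakoe-Chiba band (the function name is illustrative):

```python
def in_window(i, j, w):
    """Warping-window constraint: cell (i, j) is explored only if
    it lies within w cells of the diagonal. Cells outside the band
    are pruned, which limits the number of paths to consider."""
    return abs(i - j) <= w

# With a window of 1, cell (0, 2) is pruned but (1, 2) is kept.
assert not in_window(0, 2, 1)
assert in_window(1, 2, 1)
```

For sequences of very different lengths, the diagonal of the matrix is not at abs(i - j) == 0, so the band is usually defined around the rescaled diagonal instead; the simple form above assumes roughly equal lengths.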
Taking advantage of such constraints, DTW uses dynamic programming to find the best alignment in a recursive way. Previously, the cell (i,j) of the distance matrix was defined as “the distance between the i-th element of sequence A and the j-th element of sequence B”. In the dynamic programming way of thinking, this definition is changed: instead, the cell (i,j) is defined as the length of the shortest path up to that cell. Assuming local constraints like the ones below, the cell (i,j) can be defined recursively:
cell(i,j) = local_distance(i,j) + MIN(cell(i-1,j), cell(i-1,j-1), cell(i, j-1))
Here, recursively means that the shortest path up to the cell (i,j) is defined in terms of the shortest path up to the adjacent cells. A lot of different local constraints can be defined (see this table) and thus there are many variations in the way DTW can be implemented.
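A minimal sketch of this recurrence in Python (pure standard library, using Euclidean local distances; the function name is my own). It fills the matrix bottom-up rather than with literal recursion, which is the usual way to implement dynamic programming:

```python
import math

def dtw_matrix(A, B):
    """Fill the cost matrix so that cell (i, j) holds the length of
    the shortest path from cell (0, 0) to cell (i, j)."""
    n, m = len(A), len(B)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(A[i], B[j])  # local distance for this cell
            if i == 0 and j == 0:
                cost[i][j] = d          # boundary condition
            else:
                # cell(i,j) = local_distance(i,j)
                #           + MIN(cell(i-1,j), cell(i-1,j-1), cell(i,j-1))
                cost[i][j] = d + min(
                    cost[i - 1][j] if i > 0 else INF,
                    cost[i][j - 1] if j > 0 else INF,
                    cost[i - 1][j - 1] if i > 0 and j > 0 else INF,
                )
    return cost

A = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
B = [(0.0, 0.0), (2.0, 0.0)]
cost = dtw_matrix(A, B)
# cost[-1][-1] is the length of the shortest path, here 1.0:
# B's first point absorbs A's first two points, then the last
# points match exactly.
```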
DTW as a distance metric
Once the algorithm has reached the top-right cell, we can use backtracking to retrieve the best alignment. If we’re just interested in comparing the two sequences, however, the top-right cell of the matrix already holds the length of the shortest path. We can therefore use the value stored in this cell as the distance between the two sequences. DTW has the nice property of being symmetric, so DTW(a,b) = DTW(b,a). However, DTW doesn’t satisfy the triangle inequality (which isn’t a problem in practice).
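To make the backtracking step concrete, here is a sketch that recovers the optimal path from an already-filled cost matrix by repeatedly stepping to the cheapest predecessor (the function name and the matrix values are illustrative):

```python
def backtrack(cost):
    """Recover the optimal alignment from a filled DTW cost matrix,
    walking from the top-right cell back to (0, 0)."""
    i, j = len(cost) - 1, len(cost[0]) - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        # Candidate predecessors allowed by the local constraints.
        steps = []
        if i > 0 and j > 0:
            steps.append((cost[i - 1][j - 1], i - 1, j - 1))
        if i > 0:
            steps.append((cost[i - 1][j], i - 1, j))
        if j > 0:
            steps.append((cost[i][j - 1], i, j - 1))
        # Step to whichever predecessor has the smallest path length.
        _, i, j = min(steps)
        path.append((i, j))
    return path[::-1]

# A small filled cost matrix (rows: elements of A, columns: elements
# of B). cost[-1][-1] == 1.0 is the DTW distance between A and B.
cost = [
    [0.0, 2.0],
    [1.0, 1.0],
    [3.0, 1.0],
]
path = backtrack(cost)
# The path satisfies the boundary constraint: it starts at (0, 0)
# and ends at the top-right cell (2, 1).
```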
Sakoe, H. and Chiba, S., "Dynamic programming algorithm optimization for spoken word recognition", IEEE Transactions on Acoustics, Speech and Signal Processing, 26(1), pp. 43-49, 1978.