R: Decrease frequency of time series data by aggregating values in OHLC series

I have a high-frequency dataset of foreign exchange rates at millisecond resolution, and I would like to convert it into lower-frequency, regular time series data in R, e.g. 1-minute or 5-minute OHLC series (open, high, low, close). The original dataset has four columns: one for the exchange rate pair, one for the timestamp (date and time), and one each for the bid and ask prices. The data has been imported from a .csv file.
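
For context, the import step presumably looked something like the sketch below (the file name is hypothetical, and the column specification is an assumption based on the head()/tail() output shown):

library(readr)

# hypothetical file; X1 = currency pair, X2 = timestamp, X3 = bid, X4 = ask
GBPUSD <- read_csv("GBPUSD_ticks.csv",
                   col_names = c("X1", "X2", "X3", "X4"),
                   col_types = cols(X1 = col_character(),
                                    X2 = col_datetime(),
                                    X3 = col_double(),
                                    X4 = col_double()))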

head(GBPUSD) and tail(GBPUSD) return the following:

# A tibble: 6 x 4
       X1                  X2      X3      X4
    <chr>              <dttm>   <dbl>   <dbl>  
1 GBP/USD 2017-06-01 00:00:00 1.28756 1.28763  
2 GBP/USD 2017-06-01 00:00:00 1.28754 1.28760  
3 GBP/USD 2017-06-01 00:00:00 1.28754 1.28759  
4 GBP/USD 2017-06-01 00:00:00 1.28753 1.28759  
5 GBP/USD 2017-06-01 00:00:00 1.28753 1.28759  
6 GBP/USD 2017-06-01 00:00:00 1.28753 1.28759


# A tibble: 6 x 4
       X1                  X2      X3      X4
    <chr>              <dttm>   <dbl>   <dbl>
1 GBP/USD 2017-06-30 20:59:56 1.30093 1.30300  
2 GBP/USD 2017-06-30 20:59:56 1.30121 1.30300  
3 GBP/USD 2017-06-30 20:59:56 1.30100 1.30390  
4 GBP/USD 2017-06-30 20:59:56 1.30146 1.30452  
5 GBP/USD 2017-06-30 20:59:56 1.30145 1.30447  
6 GBP/USD 2017-06-30 20:59:56 1.30145 1.30447


It seems you want to convert each column (bid, ask) into 4 columns (open, high, low, close), grouped by some time interval such as 5 minutes. I appreciate @dmi3kno showing some tibbletime functionality, but I think this may be a bit more along the lines of what you are after.

Note that this will change slightly in the next version of tibbletime, but it works under the current 0.0.2.

For each 5-minute period, the open/high/low/close of both the bid and ask columns are taken.


library(tibbletime)
library(dplyr)

df <- create_series("2017-12-20 00:00:00" ~ "2017-12-20 01:00:00", "sec") %>%
  mutate(bid = runif(nrow(.)),
         ask = bid + .0001)
df
#> # A time tibble: 3,601 x 3
#> # Index: date
#>    date                   bid    ask
#>  * <dttm>               <dbl>  <dbl>
#>  1 2017-12-20 00:00:00 0.208  0.208
#>  2 2017-12-20 00:00:01 0.0629 0.0630
#>  3 2017-12-20 00:00:02 0.505  0.505
#>  4 2017-12-20 00:00:03 0.0841 0.0842
#>  5 2017-12-20 00:00:04 0.986  0.987
#>  6 2017-12-20 00:00:05 0.225  0.225
#>  7 2017-12-20 00:00:06 0.536  0.536
#>  8 2017-12-20 00:00:07 0.767  0.767
#>  9 2017-12-20 00:00:08 0.994  0.994
#> 10 2017-12-20 00:00:09 0.807  0.808
#> # ... with 3,591 more rows

df %>%
  mutate(date = collapse_index(date, "5 min")) %>%
  group_by(date) %>%
  summarise_all(
    .funs = funs(
      open  = dplyr::first(.),
      high  = max(.),
      low   = min(.),
      close = dplyr::last(.)
    )
  )
#> # A time tibble: 13 x 9
#> # Index: date
#>    date                bid_o… ask_o… bid_h… ask_h…  bid_low ask_low bid_c…
#>  * <dttm>               <dbl>  <dbl>  <dbl>  <dbl>    <dbl>   <dbl>  <dbl>
#>  1 2017-12-20 00:04:59  0.208  0.208  1.000  1.000 0.00293  3.03e-3 0.389
#>  2 2017-12-20 00:09:59  0.772  0.772  0.997  0.997 0.000115 2.15e-4 0.676
#>  3 2017-12-20 00:14:59  0.457  0.457  0.995  0.996 0.00522  5.32e-3 0.363
#>  4 2017-12-20 00:19:59  0.586  0.586  0.997  0.997 0.00912  9.22e-3 0.0339
#>  5 2017-12-20 00:24:59  0.385  0.385  0.998  0.998 0.0131   1.32e-2 0.0907
#>  6 2017-12-20 00:29:59  0.548  0.548  0.996  0.996 0.00126  1.36e-3 0.320
#>  7 2017-12-20 00:34:59  0.240  0.240  0.995  0.995 0.00466  4.76e-3 0.153
#>  8 2017-12-20 00:39:59  0.404  0.405  0.999  0.999 0.000481 5.81e-4 0.709
#>  9 2017-12-20 00:44:59  0.468  0.468  0.999  0.999 0.00101  1.11e-3 0.0716
#> 10 2017-12-20 00:49:59  0.580  0.580  0.996  0.996 0.000336 4.36e-4 0.395
#> 11 2017-12-20 00:54:59  0.242  0.242  0.999  0.999 0.00111  1.21e-3 0.762
#> 12 2017-12-20 00:59:59  0.474  0.474  0.987  0.987 0.000858 9.58e-4 0.335
#> 13 2017-12-20 01:00:00  0.974  0.974  0.974  0.974 0.974    9.74e-1 0.974
#> # ... with 1 more variable: ask_close <dbl>
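
Applied to the OP's GBPUSD data, the same recipe would look roughly like the sketch below (an assumption: X2 is already parsed as POSIXct and the columns are named as in the question):

library(tibbletime)
library(dplyr)

GBPUSD %>%
  as_tbl_time(index = X2) %>%                   # declare X2 as the time index
  mutate(X2 = collapse_index(X2, "5 min")) %>%  # collapse timestamps into 5-minute bins
  group_by(X1, X2) %>%
  summarise_at(
    vars(X3, X4),                               # bid and ask
    funs(open = dplyr::first(.), high = max(.), low = min(.), close = dplyr::last(.))
  )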

Update: the post has been revised to reflect the changes in tibbletime 0.1.0.


This is a perfect example for trying out the great tibbletime package. I will generate my own data to illustrate the point.

library(tibbletime)
library(dplyr)

df <- tibbletime::create_series(2017-12-20 + 01:06:00 ~ 2017-12-20 + 01:20:00, "sec") %>%
         mutate(open = runif(nrow(.)),
                close = runif(nrow(.)))
df

This is about 15 minutes of data at one-second resolution:

# A time tibble: 841 x 3
# Index: date
                  date       open       close
 *              <dttm>      <dbl>       <dbl>
 1 2017-12-20 01:06:00 0.63328803 0.357378011
 2 2017-12-20 01:06:01 0.09597444 0.150583962
 3 2017-12-20 01:06:02 0.23601820 0.974341599
 4 2017-12-20 01:06:03 0.71832656 0.092265867
 5 2017-12-20 01:06:04 0.32471587 0.391190310
 6 2017-12-20 01:06:05 0.76378711 0.534765217
 7 2017-12-20 01:06:06 0.92463265 0.694693458
 8 2017-12-20 01:06:07 0.74026638 0.006054806
 9 2017-12-20 01:06:08 0.77064030 0.911641146
10 2017-12-20 01:06:09 0.87130949 0.740816479
# ... with 831 more rows

Changing the periodicity of the data is as simple as a single command:

as_period(df, 5~M)

This aggregates the data into 5-minute intervals (by default tibbletime picks the first observation of each period, rather than an average or a sum):

# A time tibble: 3 x 3
# Index: date
                 date      open     close
*              <dttm>     <dbl>     <dbl>
1 2017-12-20 01:06:00 0.6332880 0.3573780
2 2017-12-20 01:11:00 0.9235639 0.7043025
3 2017-12-20 01:16:00 0.6955685 0.1641798

Check out the excellent vignette for more details.


I think it would be easier to use the aggregate function. However, depending on the data, you may need to convert the datetime column to character first (if the raw data keeps millisecond values). If needed, I suggest using lubridate to convert it back to a datetime afterwards.

GBPUSD$X2 <- as.character(GBPUSD$X2) # optional; if the steps below yield bad results
GBPUSD$X2 <- substr(GBPUSD$X2, 1, 19) # optional; keeps the timestamp only up to seconds (drops milliseconds)
# get High values for both bid and ask prices:
GBPUSD_H <- aggregate(cbind(X3, X4)~X1+X2, data=GBPUSD, FUN=max)
# get Low values for both bid and ask prices:
GBPUSD_L <- aggregate(cbind(X3, X4)~X1+X2, data=GBPUSD, FUN=min)
# merge the High and Low values together
GBPUSD_NEW <- merge(GBPUSD_H, GBPUSD_L, by=c("X1","X2"), suffixes=c(".HIGH",".LOW"))
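
If you went through the as.character() step above, a minimal sketch of turning X2 back into a datetime with lubridate, as suggested (the timezone is an assumption):

library(lubridate)

# parse the "YYYY-MM-DD HH:MM:SS" strings back into POSIXct
GBPUSD_NEW$X2 <- ymd_hms(GBPUSD_NEW$X2, tz = "UTC")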

To get all of the High, Low, Open, and Close values in one go:

library(data.table)

GBPUSD <- data.table(GBPUSD, key=c("X1","X2"))
GBPUSD_NEW <- GBPUSD[, list(X3.HIGH=max(X3), X3.LOW=min(X3), X3.OPEN=X3[1],
                            X3.CLOSE=X3[length(X3)], X4.HIGH=max(X4), X4.LOW=min(X4),
                            X4.OPEN=X4[1], X4.CLOSE=X4[length(X4)]), by=c("X1","X2")]

However, for this to work correctly, the data first needs to be sorted so that, within each second, the first value is the open and the last value is the close.
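
A minimal sketch of that ordering step (if millisecond timestamps are still available, sort on those instead, so that ties within a second resolve correctly):

# sort by currency pair, then by timestamp
GBPUSD <- GBPUSD[order(GBPUSD$X1, GBPUSD$X2), ]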

Now, if you need minutes rather than seconds (or hours), adjust the substr accordingly. If you want more customization, such as 15-minute intervals, I would suggest adding a helper column.
Example code:

GBPUSD$MIN <- floor(as.numeric(substr(GBPUSD$X2, 15, 16))/15) # bucket index: 0 for minutes 00-14, 1 for 15-29, ...
GBPUSD$X2 <- paste0(substr(GBPUSD$X2, 1, 14), sprintf("%02d", GBPUSD$MIN*15), ":00") # e.g. "... 00:00:00" for 00:00-00:14
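
With the helper in place, the same data.table one-shot aggregation from above can be reused unchanged, since X2 now holds the 15-minute bucket label (a sketch; GBPUSD_15M is a hypothetical result name):

library(data.table)

GBPUSD <- data.table(GBPUSD, key=c("X1","X2"))
GBPUSD_15M <- GBPUSD[, list(X3.HIGH=max(X3), X3.LOW=min(X3), X3.OPEN=X3[1],
                            X3.CLOSE=X3[length(X3)], X4.HIGH=max(X4), X4.LOW=min(X4),
                            X4.OPEN=X4[1], X4.CLOSE=X4[length(X4)]), by=c("X1","X2")]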

Please feel free to ask if this does not meet your requirements.

P.S.: NAs cause problems in aggregate if the key columns contain them. Handle them first:

GBPUSD$X2[is.na(GBPUSD$X2)] <- "2017-05-05 00:00:00" # example; be careful to use the same class and format for the replacement

For the didactic purposes below, I altered the OP's original dataset:

df <- data.frame(
  X1 = c("GBP/USD"),
  X2 = c("2017-06-01 00:00:00","2017-06-01 00:00:00","2017-06-01 00:00:01","2017-06-01 00:00:01","2017-06-01 00:00:01","2017-06-01 00:00:02","2017-06-30 20:59:52","2017-06-30 20:59:54","2017-06-30 20:59:54","2017-06-30 20:59:56","2017-06-30 20:59:56","2017-06-30 20:59:56"),
  X3 = c(1.28756, 1.28754, 1.28754, 1.28753, 1.28752, 1.28757, 1.30093, 1.30121, 1.30100, 1.30146, 1.30145, 1.30145),
  X4 = c(1.28763, 1.28760, 1.28759, 1.28758, 1.28755, 1.28760, 1.30300, 1.30300, 1.30390, 1.30452, 1.30447, 1.30447),
  stringsAsFactors = FALSE)

df

        X1                  X2      X3      X4
1  GBP/USD 2017-06-01 00:00:00 1.28756 1.28763
2  GBP/USD 2017-06-01 00:00:00 1.28754 1.28760
3  GBP/USD 2017-06-01 00:00:01 1.28754 1.28759
4  GBP/USD 2017-06-01 00:00:01 1.28753 1.28758
5  GBP/USD 2017-06-01 00:00:01 1.28752 1.28755
6  GBP/USD 2017-06-01 00:00:02 1.28757 1.28760
7  GBP/USD 2017-06-30 20:59:52 1.30093 1.30300
8  GBP/USD 2017-06-30 20:59:54 1.30121 1.30300
9  GBP/USD 2017-06-30 20:59:54 1.30100 1.30390
10 GBP/USD 2017-06-30 20:59:56 1.30146 1.30452
11 GBP/USD 2017-06-30 20:59:56 1.30145 1.30447
12 GBP/USD 2017-06-30 20:59:56 1.30145 1.30447

Now, in the low-frequency data there will be groups of identical timestamps. So we have to find the indices corresponding to the unique starting points of the groups, and their endings:

indices <- seq_along(df[,2])[!(duplicated(df[,2]))] # 1  3  6  7  8 10; the beginnings of groups (observations)
indices - 1   # 0  2  5  6  7   9; for finding the endings of groups
numberoflowfreq <- length(indices) # 6: number of groupings (obs.) for Low Freq data

Writing it out explicitly to see the pattern:

mean(df[1:((indices -1)[2]),3]) # from 1 to 2
mean(df[indices[2]:((indices -1)[3]),3]) # from 3 to 5
mean(df[indices[3]:((indices -1)[4]),3]) # from 6 to 6
mean(df[indices[4]:((indices -1)[5]),3]) # from 7 to 7
mean(df[indices[5]:((indices -1)[6]),3]) # from 8 to 9
mean(df[indices[6]:nrow(df),3]) # from 10 to 12

Simplifying the pattern:

mean3rdColumn_1st <- mean(df[1:((indices -1)[2]),3]) # from 1 to 2
mean3rdColumn_Between <- sapply(2:(numberoflowfreq-1), function(i)  mean(df[indices[i]:((indices -1)[i+1]),3]) )
mean3rdColumn_Last <- mean(df[indices[6]:nrow(df),3]) # from 10 to 12
# 3rd column in low frequency data:    
c(mean3rdColumn_1st, mean3rdColumn_Between, mean3rdColumn_Last)

Likewise for the 4th column:

mean4thColumn_1st <- mean(df[1:((indices -1)[2]),4]) # from 1 to 2
mean4thColumn_Between <- sapply(2:(numberoflowfreq-1), function(i)  mean(df[indices[i]:((indices -1)[i+1]),4]) )
mean4thColumn_Last <- mean(df[indices[6]:nrow(df),4]) # from 10 to 12
# 4th column in low frequency data:
c(mean4thColumn_1st, mean4thColumn_Between, mean4thColumn_Last)
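
For reference, the same per-timestamp means for both columns can also be obtained in one step with base aggregate(), which sidesteps the manual index bookkeeping (a sketch on the df defined above):

aggregate(cbind(X3, X4) ~ X2, data = df, FUN = mean)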

Collecting all of the effort:

LowFrqData <- data.frame(
  X1 = c("GBP/USD"),
  X2 = df[indices, 2],
  X3 = c(mean3rdColumn_1st, mean3rdColumn_Between, mean3rdColumn_Last),
  x4 = c(mean4thColumn_1st, mean4thColumn_Between, mean4thColumn_Last),
  stringsAsFactors = FALSE)
LowFrqData

       X1                  X2       X3       x4
1 GBP/USD 2017-06-01 00:00:00 1.287550 1.287615
2 GBP/USD 2017-06-01 00:00:01 1.287530 1.287573
3 GBP/USD 2017-06-01 00:00:02 1.287570 1.287600
4 GBP/USD 2017-06-30 20:59:52 1.300930 1.303000
5 GBP/USD 2017-06-30 20:59:54 1.301105 1.303450
6 GBP/USD 2017-06-30 20:59:56 1.301453 1.304487

Now column X2 holds unique timestamp values, and X3 and X4 were formed from the corresponding cells.

Also note that not all minutes within a given range may have values. For such cases you can pump in NAs. On the other hand, one may ignore the effect of this irregularity, since the spacing between observations will/may be the same for many observations and is therefore not that highly irregular. Also consider that converting the data to equally spaced observations via linear interpolation can introduce a number of significant and hard-to-quantify biases (see: Scholes and Williams).

M. Scholes and J. Williams, "Estimating betas from nonsynchronous data", Journal of Financial Economics, 5: 309-327, 1977.
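
A hedged sketch of the "pump in NAs" idea: build the full grid of timestamps over the observed range and left-join the aggregated data onto it, so that periods without any ticks show up as NA rows (here at one-second resolution to match LowFrqData; the range endpoints are taken from the example data, and LowFrqFull is a hypothetical name):

# full per-second grid over the observed range
full_grid <- data.frame(
  X2 = format(seq(from = as.POSIXct("2017-06-01 00:00:00"),
                  to   = as.POSIXct("2017-06-30 20:59:56"),
                  by   = "1 sec")),
  stringsAsFactors = FALSE)

# left join: seconds with no observations get NA in X3/x4
LowFrqFull <- merge(full_grid, LowFrqData, by = "X2", all.x = TRUE)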

Now, the regular 5-minute series part:

as.numeric(as.POSIXct("1970-01-01 03:00:00"))  # 0; starting point for ZERO seconds."1970-01-01 03:01:00" equals 60.
as.numeric(as.POSIXct("2017-06-01 00:00:00")) # 1496264400
# Passed seconds after the first observation in the dataset
PassedSecs <- as.numeric(as.POSIXct(LowFrqData$X2)) - 1496264400

LowFrq5minuteRaw <- cbind(LowFrqData, PassedSecs, stringsAsFactors=FALSE)
LowFrq5minuteRaw

       X1                  X2       X3       x4 PassedSecs
1 GBP/USD 2017-06-01 00:00:00 1.287550 1.287615          0
2 GBP/USD 2017-06-01 00:00:01 1.287530 1.287573          1
3 GBP/USD 2017-06-01 00:00:02 1.287570 1.287600          2
4 GBP/USD 2017-06-30 20:59:52 1.300930 1.303000    2581192
5 GBP/USD 2017-06-30 20:59:54 1.301105 1.303450    2581194
6 GBP/USD 2017-06-30 20:59:56 1.301453 1.304487    2581196

5 minutes means 5 * 60 = 300 seconds. So "having the same quotient when dividing by 300" groups the observations into 5-minute intervals.

LowFrq5minuteRaw2 <- cbind(LowFrqData, PassedSecs, QbyDto300 = PassedSecs %/% 300, stringsAsFactors = FALSE)
LowFrq5minuteRaw2

       X1                  X2       X3       x4 PassedSecs QbyDto300
1 GBP/USD 2017-06-01 00:00:00 1.287550 1.287615          0         0
2 GBP/USD 2017-06-01 00:00:01 1.287530 1.287573          1         0
3 GBP/USD 2017-06-01 00:00:02 1.287570 1.287600          2         0
4 GBP/USD 2017-06-30 20:59:52 1.300930 1.303000    2581192      8603
5 GBP/USD 2017-06-30 20:59:54 1.301105 1.303450    2581194      8603
6 GBP/USD 2017-06-30 20:59:56 1.301453 1.304487    2581196      8603

indices2 <- seq_along(LowFrq5minuteRaw2[,6])[!(duplicated(LowFrq5minuteRaw2[,6]))] # 1  4; the beginnings of groups

LowFrq5minute <- data.frame(
  X1 = c("GBP/USD"),
  X2 = LowFrq5minuteRaw2[indices2, 2],
  X3 = aggregate(X3 ~ QbyDto300, LowFrq5minuteRaw2, mean)[, 2],
  X4 = aggregate(x4 ~ QbyDto300, LowFrq5minuteRaw2, mean)[, 2])
LowFrq5minute

       X1                  X2       X3       X4
1 GBP/USD 2017-06-01 00:00:00 1.287550 1.287596
2 GBP/USD 2017-06-30 20:59:52 1.301163 1.303646

X2 holds the timestamp of the first observation falling in each 5-minute interval, i.e. the representative of that interval.
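
For comparison, base R's cut() on POSIXct timestamps can produce the same kind of 5-minute bins without the manual quotient arithmetic (a sketch; assumes X2 parses cleanly with as.POSIXct):

# label each row with the 5-minute interval it falls in
LowFrqData$bin <- cut(as.POSIXct(LowFrqData$X2), breaks = "5 min")

# mean bid/ask per 5-minute bin, analogous to the quotient-based version above
aggregate(cbind(X3, x4) ~ bin, data = LowFrqData, FUN = mean)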