I created a trading environment with tf-agents:

`env = TradingEnv(df=df.head(100000), lkb=1000)
tf_env = tf_py_environment.TFPyEnvironment(env)`

I pass in a DataFrame of 100,000 rows, of which only the close price is used, giving a NumPy array of 100,000 stock-price time-series values:
df: Date Open High Low Close volume
0 2015-02-02 09:15:00+05:30 586.60 589.70 584.85 584.95 171419
1 2015-02-02 09:20:00+05:30 584.95 585.30 581.25 582.30 59338
2 2015-02-02 09:25:00+05:30 582.30 585.05 581.70 581.70 52299
3 2015-02-02 09:30:00+05:30 581.70 583.25 581.70 582.60 44143
4 2015-02-02 09:35:00+05:30 582.75 584.00 582.75 582.90 42731
... ... ... ... ... ... ...
99995 2020-07-06 11:40:00+05:30 106.85 106.90 106.55 106.70 735032
99996 2020-07-06 11:45:00+05:30 106.80 107.30 106.70 107.25 1751810
99997 2020-07-06 11:50:00+05:30 107.30 107.50 107.10 107.35 1608952
99998 2020-07-06 11:55:00+05:30 107.35 107.45 107.10 107.20 959097
99999 2020-07-06 12:00:00+05:30 107.20 107.35 107.10 107.20 865438
At each step the agent has access to the previous 1000 prices plus the current price of the stock, i.e. 1001 values.
It can choose one of 3 possible actions: 0, 1, or 2.
The environment is then wrapped in a TFPyEnvironment to convert it into a TF environment.
The prices the agent can observe form a 1-D NumPy array:
prices = [584.95 582.3 581.7 ... 107.35 107.2 107.2 ]
TimeStep specs:
TimeStep Specs: TimeStep( {'discount': BoundedTensorSpec(shape=(), dtype=tf.float32,
name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)),
'observation': BoundedTensorSpec(shape=(1001,), dtype=tf.float32, name='_observation',
minimum=array(0., dtype=float32), maximum=array(3.4028235e+38, dtype=float32)), 'reward':
TensorSpec(shape=(), dtype=tf.float32, name='reward'), 'step_type': TensorSpec(shape=(),
dtype=tf.int32, name='step_type')}) Action Specs: BoundedTensorSpec(shape=(), dtype=tf.int32,
name='_action', minimum=array(0, dtype=int32), maximum=array(2, dtype=int32))
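A note on the shapes involved (my illustration, not part of the original question): since the observation spec is `(1001,)`, a batched observation arrives as a rank-2 tensor of shape `(batch, 1001)`, while Conv1D expects rank-3 input of shape `(batch, steps, channels)`. A minimal NumPy sketch of the mismatch:

```python
import numpy as np

# The observation spec is (1001,), so one batched observation is rank 2:
obs = np.random.rand(1, 1001).astype(np.float32)
print(obs.ndim)  # 2 -- this is the rank Conv1D rejects (it wants min_ndim=3)

# Adding a trailing channel axis yields the (batch, steps, channels) layout
# that Conv1D expects:
obs_3d = obs[..., np.newaxis]
print(obs_3d.shape)  # (1, 1001, 1)
```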
Then I built a DQN agent, but I want to build its network with Conv1D layers.
My network consists of Conv1D, MaxPool1D, Conv1D, MaxPool1D, Dense(64), Dense(32), and a Q-values layer.
I created a list of layers with the tf.keras.layers API, stored them in the dense_layers list, and then created a sequential network.
The agent:
`learning_rate = 1e-3

action_tensor_spec = tensor_spec.from_spec(tf_env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

dense_layers = []
dense_layers.append(tf.keras.layers.Conv1D(
    64,
    kernel_size=10,
    activation=tf.keras.activations.relu,
    input_shape=(1, 1001),
))
dense_layers.append(tf.keras.layers.MaxPool1D(
    pool_size=2,
    strides=None,
    padding='valid',
))
dense_layers.append(tf.keras.layers.Conv1D(
    64,
    kernel_size=10,
    activation=tf.keras.activations.relu,
))
dense_layers.append(tf.keras.layers.MaxPool1D(
    pool_size=2,
    strides=None,
    padding='valid',
))
dense_layers.append(tf.keras.layers.Dense(
    64,
    activation=tf.keras.activations.relu,
))
dense_layers.append(tf.keras.layers.Dense(
    32,
    activation=tf.keras.activations.relu,
))
q_values_layer = tf.keras.layers.Dense(
    num_actions,
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(
        minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))

q_net = sequential.Sequential(dense_layers + [q_values_layer])`
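For reference (my own arithmetic, not from the question): with 'valid' padding and stride 1, Conv1D maps a sequence of length L to L - k + 1, and MaxPool1D with pool size 2 maps L to L // 2. Assuming the (1001,) observation were reshaped to (1001, 1), the intended stack would produce these feature-map lengths:

```python
# Pure-Python check of the feature-map lengths the stack above would yield,
# assuming the (1001,) observation were first given a channel axis: (1001, 1).
def conv1d_out(length, kernel_size):
    # 'valid' padding, stride 1
    return length - kernel_size + 1

def maxpool1d_out(length, pool_size):
    # 'valid' padding, stride defaults to pool_size
    return length // pool_size

L = 1001
L = conv1d_out(L, 10)     # Conv1D(64, 10)  -> 992
L = maxpool1d_out(L, 2)   # MaxPool1D(2)    -> 496
L = conv1d_out(L, 10)     # Conv1D(64, 10)  -> 487
L = maxpool1d_out(L, 2)   # MaxPool1D(2)    -> 243
print(L)  # 243
```

Note also that without a Flatten (or global pooling) layer between the last MaxPool1D and the Dense layers, the Dense layers apply per time step, so the Q-values layer would emit one set of Q-values per remaining step rather than one per observation.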
`optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)

agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)

agent.initialize()`
But when I pass q_net as the q_network to DqnAgent, I get this error:
`---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
68 optimizer=optimizer,
69 td_errors_loss_fn=common.element_wise_squared_loss,
---> 70 train_step_counter=train_step_counter)
71
72 agent.initialize()
7 frames
/usr/local/lib/python3.7/dist-packages/tf_agents/networks/sequential.py in call(self, inputs,
network_state, **kwargs)
222 else:
223 # Does not maintain state.
--> 224 inputs = layer(inputs, **layer_kwargs)
225
226 return inputs, tuple(next_network_state)
ValueError: Exception encountered when calling layer "sequential_54" (type Sequential).
Input 0 of layer "conv1d_104" is incompatible with the layer: expected min_ndim=3, found
ndim=2. Full shape received: (1, 1001)
Call arguments received by layer "sequential_54" (type Sequential):
• inputs=tf.Tensor(shape=(1, 1001), dtype=float32)
• network_state=()
• kwargs={'step_type': 'tf.Tensor(shape=(1,), dtype=int32)', 'training': 'None'}
In call to configurable 'DqnAgent' (<class 'tf_agents.agents.dqn.dqn_agent.DqnAgent'>)`
I know it has something to do with the input shape of the first Conv1D layer, but I can't figure out what I am doing wrong.
At each time_step the agent receives its price observation as a 1-D array of length 1001, so I assumed the Conv1D input shape should be (1, 1001), but that is apparently wrong, and I don't know how to fix this error.
Any help would be appreciated.
1 Answer
Unfortunately, TF-Agents does not support Conv1D layers out of the box. Almost all of its network classes build their networks through the EncodingNetwork class. If you look at the GitHub code or the documentation, EncodingNetwork does provide a Conv1D option, but it defaults to Conv2D, and none of the network classes expose a parameter to set conv_type. There is a workaround, however: copy the network class you want to use and change the line that calls EncodingNetwork so that conv_type is set to '1d'. I have also opened a GitHub issue about this:
https://github.com/tensorflow/agents/issues/779