pandas with non-time-series data

nle07wnf · asked 2023-11-15 · category: Other

I have some data that I am processing in DataFrames with pandas. They contain about 10,000 rows and 6 columns.
The problem is that I ran several trials, and the different data sets have slightly different indices. (It is a force-length test on several materials, and of course the measurement points do not line up exactly.)
My idea now is to "resample" the data using the index, which holds the length values. But the resample function in pandas seems to work only on datetime data sets.
I tried converting the index via to_datetime, and that works. But after resampling I need to get back to the original scale, i.e. some kind of from_datetime function.
Is there a way to do this, or am I completely on the wrong track and should rather use something like groupby?

Edit:

The data looks like the table below; the length values are used as the index. I have several of these DataFrames, so I want to align them to the same "frame rate" and then cut them, so that I can compare the different data sets.
What I have tried so far looks like this:

df_1_dt = df_1.copy()  # work on a copy for the conversion
# convert the index, interpreting the lengths as seconds .. good idea?!
df_1_dt.index = pd.to_datetime(df_1_dt.index, unit='s')
df_1_dt_rs = df_1_dt.copy()  # a frame for the resampling
# resample by the generated time; an aggregation such as .mean() is needed
df_1_dt_rs = df_1_dt_rs.resample(rule='s').mean()
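For completeness, the round trip described in the question (treat lengths as seconds, resample, then convert the timestamps back) can be sketched as follows; the 1 ms grid, the column name, and the toy values are made up for illustration:

```python
import pandas as pd

# Hypothetical toy frame: force readings with a float length index.
df = pd.DataFrame({'Force1': [4.74, 4.72, 4.70]},
                  index=[0.001, 0.002, 0.004])
df.index.name = 'Length'

# Treat the length values as seconds to obtain a DatetimeIndex.
df_dt = df.copy()
df_dt.index = pd.to_datetime(df_dt.index, unit='s')

# Resample onto a regular 1 ms grid and interpolate the gaps.
df_rs = df_dt.resample('1ms').mean().interpolate('linear')

# The missing "from_datetime": epoch nanoseconds back to float lengths.
df_rs.index = df_rs.index.astype('int64') / 1e9
```

The last line is the inverse of `to_datetime(..., unit='s')`: the timestamps' epoch nanoseconds divided by 1e9 recover the original length scale.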

The data:

¦  Index (Length)   ¦    Force1     ¦    Force2     ¦  
¦-------------------+---------------+---------------¦  
¦ 8.04662074828e-06 ¦ 4.74251270294 ¦ 4.72051584721 ¦  
¦ 8.0898882798e-06  ¦ 4.72051584721 ¦ 4.72161570191 ¦  
¦ 1.61797765596e-05 ¦ 4.69851899147 ¦ 4.72271555662 ¦  
¦ 1.65476570973e-05 ¦ 4.65452528    ¦ 4.72491526604 ¦  
¦ 2.41398605024e-05 ¦ 4.67945501539 ¦ 4.72589291467 ¦  
¦ 2.42696630876e-05 ¦ 4.70438475079 ¦ 4.7268705633  ¦  
¦ 9.60953101751e-05 ¦ 4.72931448619 ¦ 4.72784821192 ¦  
¦ 0.00507703541206  ¦ 4.80410369237 ¦ 4.73078115781 ¦  
¦ 0.00513927175509  ¦ 4.87889289856 ¦ 4.7337141037  ¦  
¦ 0.00868965311878  ¦ 4.9349848032  ¦ 4.74251282215 ¦  
¦ 0.00902026197556  ¦ 4.99107670784 ¦ 4.7513115406  ¦  
¦ 0.00929150878827  ¦ 5.10326051712 ¦ 4.76890897751 ¦  
¦ 0.0291729332784   ¦ 5.14945375919 ¦ 4.78650641441 ¦  
¦ 0.0296332588857   ¦ 5.17255038023 ¦ 4.79530513287 ¦  
¦ 0.0297080942518   ¦ 5.19564700127 ¦ 4.80410385132 ¦  
¦ 0.0362595526707   ¦ 5.2187436223  ¦ 4.80850321054 ¦  
¦ 0.0370305483177   ¦ 5.24184024334 ¦ 4.81290256977 ¦  
¦ 0.0381506204153   ¦ 5.28803348541 ¦ 4.82170128822 ¦  
¦ 0.0444440795306   ¦ 5.30783069134 ¦ 4.83050000668 ¦  
¦ 0.0450121369102   ¦ 5.3177292943  ¦ 4.8348993659  ¦  
¦ 0.0453465140473   ¦ 5.32762789726 ¦ 4.83929872513 ¦  
¦ 0.0515533437013   ¦ 5.33752650023 ¦ 4.85359662771 ¦  
¦ 0.05262489708     ¦ 5.34742510319 ¦ 4.8678945303  ¦  
¦ 0.0541273847206   ¦ 5.36722230911 ¦ 4.89649033546 ¦  
¦ 0.0600755845953   ¦ 5.37822067738 ¦ 4.92508614063 ¦  
¦ 0.0607712385295   ¦ 5.38371986151 ¦ 4.93938404322 ¦  
¦ 0.0612954159368   ¦ 5.38921904564 ¦ 4.9536819458  ¦  
¦ 0.0670288249293   ¦ 5.39471822977 ¦ 4.97457891703 ¦  
¦ 0.0683640870058   ¦ 5.4002174139  ¦ 4.99547588825 ¦  
¦ 0.0703192637772   ¦ 5.41121578217 ¦ 5.0372698307  ¦  
¦ 0.0757871634772   ¦ 5.43981158733 ¦ 5.07906377316 ¦  
¦ 0.0766597757545   ¦ 5.45410948992 ¦ 5.09996074438 ¦  
¦ 0.077317850103    ¦ 5.4684073925  ¦ 5.12085771561 ¦  
¦ 0.0825991083545   ¦ 5.48270529509 ¦ 5.13295596838 ¦  
¦ 0.0841354654428   ¦ 5.49700319767 ¦ 5.14505422115 ¦  
¦ 0.0865525182528   ¦ 5.52559900284 ¦ 5.1692507267  ¦


7vux5j2d · Answer 1

I found out how to do this using reindex and interpolate.
Here is the result: the blue points are the original data, the red line/points are the reindexed and interpolated data.
And this is the code:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.DataFrame({'X' : [1.1, 2.05, 3.07, 4.2],
                   'Y1': [10.1, 15.2, 35.3, 40.4],
                   'Y2': [55.05, 40.4, 84.17, 31.5]})
print(df)

df.set_index('X', inplace=True)
print(df)

Xresampled = np.linspace(1,4,15)
print(Xresampled)

#Resampling
#df = df.reindex(df.index.union(Xresampled))

#Interpolation technique to use. One of:

#'linear': Ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes.
#'time': Works on daily and higher resolution data to interpolate given length of interval.
#'index', 'values': use the actual numerical values of the index.
#'pad': Fill in NaNs using existing values.
#'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'spline', 'barycentric', 'polynomial': Passed to scipy.interpolate.interp1d. These methods use the numerical values of the index. Both 'polynomial' and 'spline' require that you also specify an order (int), e.g. df.interpolate(method='polynomial', order=5).
#'krogh', 'piecewise_polynomial', 'spline', 'pchip', 'akima': Wrappers around the SciPy interpolation methods of similar names. See Notes.
#'from_derivatives': Refers to scipy.interpolate.BPoly.from_derivatives which replaces 'piecewise_polynomial' interpolation method in scipy 0.18.

df_resampled = df.reindex(df.index.union(Xresampled)).interpolate('values').loc[Xresampled]
print(df_resampled)

# gca stands for 'get current axis'
ax = plt.gca()
df.plot(           style='X',  y='Y2', color = 'blue', ax=ax, label = 'Original Data'     )
df_resampled.plot( style='.-', y='Y2', color = 'red',  ax=ax, label = 'Interpolated Data' )
ax.set_ylabel('Y2')
plt.show()
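Since the original question is about several DataFrames, this reindex/interpolate recipe can be wrapped in a small helper and applied to each frame so they all land on one common grid. The function name and the toy frames below are made up:

```python
import numpy as np
import pandas as pd

def resample_to_grid(df, grid):
    """Interpolate a frame with a float index onto a target grid."""
    grid = pd.Index(np.asarray(grid, dtype=float))
    # Union old and new index, interpolate by index value, keep the grid.
    out = df.reindex(df.index.union(grid)).interpolate(method='index')
    return out.loc[grid]

# Two frames with slightly different length indices ...
df_a = pd.DataFrame({'Force': [1.0, 2.0, 3.0]}, index=[0.0, 1.1, 2.2])
df_b = pd.DataFrame({'Force': [2.0, 4.0, 6.0]}, index=[0.1, 1.0, 2.1])

# ... aligned onto one shared grid, so they can be compared row by row.
grid = np.linspace(0.2, 2.0, 4)
aligned = [resample_to_grid(d, grid) for d in (df_a, df_b)]
```

Grid points outside a frame's index range are not extrapolated meaningfully, so the grid should stay inside the overlap of all frames.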



83qze16e · Answer 2

It sounds like what you want to do is round the length values to a lower precision.
If that is the case, you can use the built-in round function:
(dummy data)

>>> df=pd.DataFrame([[1.0000005,4],[1.232463632,5],[5.234652,9],[5.675322,10]],columns=['length','force'])
>>> df
     length  force
0  1.000001      4
1  1.232464      5
2  5.234652      9
3  5.675322     10
>>> df['rounded_length'] = df.length.apply(round, ndigits=0)
>>> df
     length  force  rounded_length
0  1.000001      4             1.0
1  1.232464      5             1.0
2  5.234652      9             5.0
3  5.675322     10             6.0

Then you can use groupby to replicate the resample() workflow:

>>> df.groupby('rounded_length').mean().force
rounded_length
1.0     4.5
5.0     9.0
6.0    10.0
Name: force, dtype: float64


Generally, resampling is only meant for dates. If you are using it for something other than dates, there is probably a more elegant solution!
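If rounding to a fixed number of digits is too coarse or too fine, the same groupby idea works with an arbitrary bin width via pd.cut; the 0.5 width below is an arbitrary choice for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'length': [1.0000005, 1.232463632, 5.234652, 5.675322],
                   'force': [4, 5, 9, 10]})

# Bin the lengths into intervals of width 0.5 and average the force per
# bin, mimicking what resample() does with time bins.
bin_width = 0.5
bins = np.arange(0.0, df['length'].max() + bin_width, bin_width)
binned = df.groupby(pd.cut(df['length'], bins), observed=True)['force'].mean()
```

observed=True drops the empty intervals, which is usually what you want with sparse measurement data.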


ego6inou · Answer 3

I had a problem very similar to yours, and I found a solution. The solution is basically:
integrate -> interpolate -> differentiate
First I will describe the problem I was solving, to make sure we are on the same page. A simple case: you have points (x1, y1) and (x2, y2) and you want (x0', y0') and (x1', y1') (you know x0' and x1' and want to find y1'), with x0' < x1 < x1' < x2. Then you want a weighted average, so y1' = ((x1' - x1) * y2 + (x1 - x0') * y1) / (x1' - x0').
Say you have a DataFrame with columns 'x' and 'y', but you want to resample it onto a new x, new_x, which is a numpy.ndarray.

# running integral of y over x (the first segment contributes 0)
df['integral'] = (df['y'] * (df['x'] - df['x'].shift(1))).fillna(0).cumsum()
# interpolate the integral onto the new grid
new_integral = np.interp(new_x, df['x'].values, df['integral'].values, left=0., right=np.nan)
new = pd.DataFrame({'new_x': new_x, 'integral': new_integral})
# differentiate back: the mean of y over each new interval
new['y'] = (new['integral'] - new['integral'].shift(1)) / (new['new_x'] - new['new_x'].shift(1))

I would start new_x at 0. and then drop the first row of the new DataFrame, since it will be NaN. You can also fill leading and trailing NaNs with whatever you like.
I hope this solves your problem. I have not proven that this method solves the problem defined above, but it is not hard to prove.
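A minimal sketch of the integrate -> interpolate -> differentiate recipe as a reusable function (the function name and the constant-signal sanity check are my own additions); for a constant y the round trip must return the same constant:

```python
import numpy as np
import pandas as pd

def conservative_resample(df, new_x):
    """Resample y(x) onto new_x by integrating, interpolating the
    integral, and differentiating back (as described above).

    The first output row has no left interval and comes out NaN; the
    caller is expected to drop or fill it.
    """
    work = df.sort_values('x')
    # Running integral of y over x; the first segment contributes 0.
    integral = (work['y'] * work['x'].diff()).fillna(0.0).cumsum()
    new_integral = np.interp(new_x, work['x'], integral, left=0.0, right=np.nan)
    out = pd.DataFrame({'x': new_x, 'integral': new_integral})
    # Differentiate back: the mean of y over each new interval.
    out['y'] = out['integral'].diff() / out['x'].diff()
    return out

# Sanity check with a constant signal: every resampled mean must be 5.
df = pd.DataFrame({'x': [0.0, 1.0, 2.0, 3.0], 'y': [5.0, 5.0, 5.0, 5.0]})
res = conservative_resample(df, np.array([0.5, 1.5, 2.5]))
```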
