Measure error between real and predicted values for a numerical variable

I have the following data frame with two columns: real, which contains the real (observed) values, and prediction, which contains the values predicted by a neural network:

error_info = data.frame(

    real = c(4.29, 4.29, 2.58, 2.58, 2.58, 4.29, 4.29, 2.58, 2.58, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 2.58, 4.29, 2.58, 4.29, 4.29, 2.58, 2.58, 4.29, 4.395, 2.58, 2.58, 4.29, 4.29, 4.29, 4.29, 2.58, 4.29, 4.29, 4.29, 4.29, 2.58, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.395, 2.58, 4.29, 2.58, 2.58, 4.29, 4.29, 4.29, 4.29, 4.29, 2.58, 4.29, 2.58, 2.58, 4.29, 2.58, 2.58, 2.58, 4.29, 4.29, 4.29, 2.58, 4.29, 4.29, 4.395, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.395, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 2.58, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 3.69, 4.29, 7.5, 7.5, 3.69, 4.29, 4.29, 4.38, 3.69, 4.29, 4.29, 4.29, 3.69, 4.29, 2.355, 7.5, 2.355, 3.69, 4.29, 4.29, 3.69, 4.29, 4.38, 4.29, 4.29, 4.29, 4.29, 2.685, 4.29, 4.29, 4.29, 2.505, 4.29, 4.29, 3.69, 2.355, 3.69, 3.69, 4.29, 4.29, 4.29, 4.29, 3.69, 4.29, 3.69, 3.69, 2.355, 4.29, 4.29, 7.5, 4.29, 4.29, 4.29, 4.29, 3.69, 4.29, 4.29, 4.29, 4.995, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 3.69, 3.69, 4.29, 4.29, 4.29, 4.29, 7.5, 4.29, 4.29, 4.29, 2.355, 2.355, 4.29, 4.995, 4.29, 4.995, 4.29, 4.29, 4.29, 4.29, 2.505, 4.29, 3.69, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 4.29, 3.69, 4.29),
    
    prediction = c(3.577603, 4.294103, 4.192045, 4.208127, 3.644351, 4.241945, 4.119295, 3.497279, 2.124032, 4.221709, 4.245278, 4.28973, 4.281651, 4.337386, 4.090599, 4.144849, 4.197775, 4.236175, 3.824383, 4.201961, 4.235718, 4.247048, 4.101677, 4.32565, 4.442637, 4.166985, 4.191882, 3.323664, 3.930904, 3.402206, 3.530595, 4.320734, 3.793721, 3.694621, 4.289192, 4.203169, 4.200753, 3.483172, 4.242767, 4.111292, 4.286765, 4.266251, 4.100159, 3.606237, 4.244504, 4.319741, 4.188286, 4.117845, 4.29256, 3.77247, 4.134753, 4.312179, 4.337652, 4.317127, 3.79052, 4.304499, 3.640364, 3.299519, 4.060131, 3.193049, 4.197697, 4.305851, 4.442637, 4.26371, 4.182557, 4.189772, 3.644938, 4.164345, 2.466769, 3.34449, 3.570721, 3.443422, 3.638152, 3.793942, 4.299377, 4.081768, 4.219947, 4.218864, 4.442637, 4.189548, 3.742856, 4.335122, 4.310055, 4.322699, 4.216964, 4.339045, 4.318364, 3.365228, 4.286728, 4.223209, 4.280165, 4.295681, 3.949143, 4.273607, 4.272849, 4.339442, 4.340625, 3.924077, 4.193641, 3.367241, 4.279878, 4.341249, 4.03595, 4.266116, 2.690956, 4.124542, 4.243636, 4.223443, 4.186926, 4.19389, 4.365594, 4.197719, 2.582689, 4.16209, 2.582689, 3.445336, 4.206464, 3.665006, 3.417957, 4.262037, 4.181643, 4.332088, 3.737998, 4.215187, 2.644375, 4.234346, 4.244241, 3.584938, 4.256929, 4.193415, 3.719356, 4.169727, 3.950235, 3.299519, 4.219109, 4.219742, 4.345511, 3.486621, 4.256047, 2.776281, 3.767214, 3.478585, 4.295528, 4.190698, 4.213821, 2.583295, 3.660733, 3.488044, 3.522516, 4.162554, 3.937062, 4.347274, 3.554428, 3.660733, 2.582689, 2.638758, 2.857123, 4.244213, 4.185945, 4.168496, 4.282227, 4.322699, 4.272289, 4.207655, 2.22378, 4.355715, 4.21553, 4.18851, 4.339099, 4.297901, 2.74221, 3.40243, 4.222603, 4.209631, 4.312113, 3.656586, 2.682545, 4.252673, 4.263944, 4.258919, 3.527474, 3.299519, 3.525204, 4.19586, 4.26398, 3.678672, 3.635063, 4.16417, 2.582689, 3.520408, 3.707914, 4.282148, 4.261422, 4.216116, 4.246694, 4.324991, 4.019819, 4.012273, 4.314591, 4.288114, 4.312571, 4.216168, 4.214969, 4.195457, 4.442637, 3.405792, 4.023122, 3.48745, 4.205299)

)

Here is what this data frame looks like:

str(error_info)
## 'data.frame':    209 obs. of  2 variables:
##  $ real      : num  4.29 4.29 2.58 2.58 2.58 4.29 4.29 2.58 2.58 4.29 ...
##  $ prediction: num  3.58 4.29 4.19 4.21 3.64 ...
error_info
##      real prediction
## 1   4.290   3.577603
## 2   4.290   4.294103
## 3   2.580   4.192045
## 4   2.580   4.208127
## 5   2.580   3.644351
## 6   4.290   4.241945
## 7   4.290   4.119295
## 8   2.580   3.497279
## 9   2.580   2.124032
## 10  4.290   4.221709
## 11  4.290   4.245278
## 12  4.290   4.289730
## 13  4.290   4.281651
## 14  4.290   4.337386
## 15  4.290   4.090599
## 16  4.290   4.144849
## 17  4.290   4.197775
## 18  4.290   4.236175
## 19  4.290   3.824383
## 20  4.290   4.201961
## 21  4.290   4.235718
## 22  4.290   4.247048
## 23  4.290   4.101677
## 24  4.290   4.325650
## 25  4.290   4.442637
## 26  4.290   4.166985
## 27  4.290   4.191882
## 28  2.580   3.323664
## 29  4.290   3.930904
## 30  2.580   3.402206
## 31  4.290   3.530595
## 32  4.290   4.320734
## 33  2.580   3.793721
## 34  2.580   3.694621
## 35  4.290   4.289192
## 36  4.395   4.203169
## 37  2.580   4.200753
## 38  2.580   3.483172
## 39  4.290   4.242767
## 40  4.290   4.111292
## 41  4.290   4.286765
## 42  4.290   4.266251
## 43  2.580   4.100159
## 44  4.290   3.606237
## 45  4.290   4.244504
## 46  4.290   4.319741
## 47  4.290   4.188286
## 48  2.580   4.117845
## 49  4.290   4.292560
## 50  4.290   3.772470
## 51  4.290   4.134753
## 52  4.290   4.312179
## 53  4.290   4.337652
## 54  4.290   4.317127
## 55  4.290   3.790520
## 56  4.290   4.304499
## 57  4.395   3.640364
## 58  2.580   3.299519
## 59  4.290   4.060131
## 60  2.580   3.193049
## 61  2.580   4.197697
## 62  4.290   4.305851
## 63  4.290   4.442637
## 64  4.290   4.263710
## 65  4.290   4.182557
## 66  4.290   4.189772
## 67  2.580   3.644938
## 68  4.290   4.164345
## 69  2.580   2.466769
## 70  2.580   3.344490
## 71  4.290   3.570721
## 72  2.580   3.443422
## 73  2.580   3.638152
## 74  2.580   3.793942
## 75  4.290   4.299377
## 76  4.290   4.081768
## 77  4.290   4.219947
## 78  2.580   4.218864
## 79  4.290   4.442637
## 80  4.290   4.189548
## 81  4.395   3.742856
## 82  4.290   4.335122
## 83  4.290   4.310055
## 84  4.290   4.322699
## 85  4.290   4.216964
## 86  4.290   4.339045
## 87  4.290   4.318364
## 88  4.395   3.365228
## 89  4.290   4.286728
## 90  4.290   4.223209
## 91  4.290   4.280165
## 92  4.290   4.295681
## 93  4.290   3.949143
## 94  4.290   4.273607
## 95  4.290   4.272849
## 96  2.580   4.339442
## 97  4.290   4.340625
## 98  4.290   3.924077
## 99  4.290   4.193641
## 100 4.290   3.367241
## 101 4.290   4.279878
## 102 4.290   4.341249
## 103 4.290   4.035950
## 104 4.290   4.266116
## 105 4.290   2.690956
## 106 4.290   4.124542
## 107 4.290   4.243636
## 108 4.290   4.223443
## 109 4.290   4.186926
## 110 4.290   4.193890
## 111 3.690   4.365594
## 112 4.290   4.197719
## 113 7.500   2.582689
## 114 7.500   4.162090
## 115 3.690   2.582689
## 116 4.290   3.445336
## 117 4.290   4.206464
## 118 4.380   3.665006
## 119 3.690   3.417957
## 120 4.290   4.262037
## 121 4.290   4.181643
## 122 4.290   4.332088
## 123 3.690   3.737998
## 124 4.290   4.215187
## 125 2.355   2.644375
## 126 7.500   4.234346
## 127 2.355   4.244241
## 128 3.690   3.584938
## 129 4.290   4.256929
## 130 4.290   4.193415
## 131 3.690   3.719356
## 132 4.290   4.169727
## 133 4.380   3.950235
## 134 4.290   3.299519
## 135 4.290   4.219109
## 136 4.290   4.219742
## 137 4.290   4.345511
## 138 2.685   3.486621
## 139 4.290   4.256047
## 140 4.290   2.776281
## 141 4.290   3.767214
## 142 2.505   3.478585
## 143 4.290   4.295528
## 144 4.290   4.190698
## 145 3.690   4.213821
## 146 2.355   2.583295
## 147 3.690   3.660733
## 148 3.690   3.488044
## 149 4.290   3.522516
## 150 4.290   4.162554
## 151 4.290   3.937062
## 152 4.290   4.347274
## 153 3.690   3.554428
## 154 4.290   3.660733
## 155 3.690   2.582689
## 156 3.690   2.638758
## 157 2.355   2.857123
## 158 4.290   4.244213
## 159 4.290   4.185945
## 160 7.500   4.168496
## 161 4.290   4.282227
## 162 4.290   4.322699
## 163 4.290   4.272289
## 164 4.290   4.207655
## 165 3.690   2.223780
## 166 4.290   4.355715
## 167 4.290   4.215530
## 168 4.290   4.188510
## 169 4.995   4.339099
## 170 4.290   4.297901
## 171 4.290   2.742210
## 172 4.290   3.402430
## 173 4.290   4.222603
## 174 4.290   4.209631
## 175 4.290   4.312113
## 176 3.690   3.656586
## 177 3.690   2.682545
## 178 4.290   4.252673
## 179 4.290   4.263944
## 180 4.290   4.258919
## 181 4.290   3.527474
## 182 7.500   3.299519
## 183 4.290   3.525204
## 184 4.290   4.195860
## 185 4.290   4.263980
## 186 2.355   3.678672
## 187 2.355   3.635063
## 188 4.290   4.164170
## 189 4.995   2.582689
## 190 4.290   3.520408
## 191 4.995   3.707914
## 192 4.290   4.282148
## 193 4.290   4.261422
## 194 4.290   4.216116
## 195 4.290   4.246694
## 196 2.505   4.324991
## 197 4.290   4.019819
## 198 3.690   4.012273
## 199 4.290   4.314591
## 200 4.290   4.288114
## 201 4.290   4.312571
## 202 4.290   4.216168
## 203 4.290   4.214969
## 204 4.290   4.195457
## 205 4.290   4.442637
## 206 4.290   3.405792
## 207 4.290   4.023122
## 208 3.690   3.487450
## 209 4.290   4.205299

What I need is:

Use a good error-measurement method / formula / tool / technique / procedure that gives me a single number summarizing the error, because that number will eventually be compared with the error of other predicted values. That way I can compare the performance of different neural networks.

What I have done so far is:

error = error_info[,1] - error_info[,2]             # per-observation error: real - prediction
range = max(error_info[,1]) - min(error_info[,1])   # range of the real values, used to normalize below

I use the range variable above so the resulting error can be compared with other errors where the values fall in different min/max ranges.

Option 1: MAE (Mean Absolute Error)

# MAE (Mean Absolute Error)
mae = function(error) {
  mean(abs(error))
}

mae_error = mae(error) / range
mae_error
## [1] 0.09194361

Option 2: RMSE (Root Mean Square Error)

# RMSE (Root Mean Square Error)
rmse = function(error) {
  sqrt(mean(error ^ 2))
}

rmse_error = rmse(error) / range
rmse_error
## [1] 0.1689849

My questions are:

  1. What do you think about the two methods above? Which one do you think is better?
  2. Do you know about any other better method?

Thanks!

Hi @tlg265,

I was wondering about this statement:

I use the range variable above so the resulting error can be compared with other errors where the values fall in different min/max ranges.

So ... if you want to compare different models on the same data, then RMSE and MAE are directly comparable without any scaling/normalization.

If you want to make the comparison across different datasets, then I'd say it becomes more problematic, because one approach can be better for one dataset while another can be better for a different one. Typically, you could express the MAE and RMSE as a percentage of the target (real): define your error as error = (real - predicted) / real and then apply the RMSE or MAE to that relative error. That, though, becomes problematic when real is zero or close to zero, which at least does not seem to be the case in your example data.
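For illustration, something along these lines (just a sketch on your error_info data frame; the name rel_error is only for this example):

# relative (percentage-style) error; problematic if real is 0 or close to 0
rel_error = (error_info$real - error_info$prediction) / error_info$real

mean(abs(rel_error))       # MAE of the relative error
sqrt(mean(rel_error ^ 2))  # RMSE of the relative error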

In terms of choosing a loss metric, at least when comparing RMSE and MAE, the former is much more sensitive to large deviations and penalizes them more heavily than MAE. So it depends on the "business" loss: are you more "afraid" per unit of error as the deviation becomes larger? If so, I would choose RMSE.
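To see that sensitivity concretely, here is a small made-up example (toy numbers, not taken from your data) where two error vectors have the same MAE but very different RMSE:

# two error vectors with the same total absolute error,
# but the second concentrates it in one large deviation
e_spread = c(1, 1, 1, 1)
e_spike  = c(0, 0, 0, 4)

mean(abs(e_spread))        # MAE
## [1] 1
sqrt(mean(e_spread ^ 2))   # RMSE
## [1] 1

mean(abs(e_spike))         # MAE
## [1] 1
sqrt(mean(e_spike ^ 2))    # RMSE
## [1] 2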

Otherwise, I think that for typical regression problems (as opposed to classification), MAE and RMSE are the most common loss metrics. You can of course also devise an asymmetric measure where, e.g., positive deviations are penalized more than negative deviations, or vice versa; it really depends on your problem.
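As a rough sketch of what such an asymmetric measure might look like (the function name asym_mae and the weights are purely illustrative, not a standard metric):

# asymmetric MAE: error = real - prediction, so positive = under-prediction,
# negative = over-prediction; here over-predictions are weighted twice as heavily
asym_mae = function(error, w_under = 1, w_over = 2) {
  mean(ifelse(error >= 0, w_under * error, w_over * abs(error)))
}

asym_mae(error_info$real - error_info$prediction)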

Thank you @valeri for your response. Probably what we should do is check the value of the error, for example the result of mean(abs(error)) or sqrt(mean(error ^ 2)), and then decide whether we can tolerate that estimated error in relation to the values of the corresponding variable.
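For example, something like this (just a sketch, reusing the error vector defined earlier and relating both measures to the scale of the real column):

mae_value  = mean(abs(error))
rmse_value = sqrt(mean(error ^ 2))

# express both as a fraction of the typical size of the real values
mae_value  / mean(error_info$real)
rmse_value / mean(error_info$real)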

Thanks!
