Hi: I've successfully created an optimization framework using torch, but I'm not sure how to implement the loss function. I have a tensor containing predicted values (x, yhat) and a tensor containing actual values (x, y). I need to effectively join these on x and then aggregate across the "rows" as SUM((yhat - y)^2).

It's not obvious to me how to do this efficiently. I've been able to use torch_cat to implement the join, so I now have columns x, y and yhat (although many of the y values are NaN due to missing data). But I had to achieve this by converting both the actual and predicted tensors to data frames, joining them, and extracting the joined yhat column back into a tensor to concatenate onto the predicted tensor, which doesn't seem very efficient. And now I don't know whether I can do better than explicitly looping over every "row" of my "joined" tensor and incrementally accumulating the sum of squares.

Surprisingly, there seem to be no built-in or package functions available for this sort of operation, although I'm sure I can't be the first person to want to do it! Any help gratefully appreciated! (Note: I need to implement the RSS via tensor operations so that autograd works.)
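For the accumulation step, one possible answer: once you have the joined columns, no explicit row loop is needed. A minimal sketch (the `joined` tensor and its column layout are hypothetical stand-ins for your data) that masks out the NaN rows with `torch_isnan()` and computes the RSS in one vectorised, autograd-friendly expression:

```r
library(torch)

# Hypothetical joined tensor with columns x, y, yhat; NaN marks missing y
joined <- torch_tensor(matrix(c(1,   2,   3,     # x
                                5,   NaN, 7,     # y
                                4.5, 6.0, 7.5),  # yhat
                              ncol = 3))

y    <- joined[, 2]
yhat <- joined[, 3]

# Boolean mask selecting rows where y was actually observed
mask <- !torch_isnan(y)

# Vectorised sum of squared residuals over the observed rows only;
# every operation here is a torch op, so autograd can differentiate
# the result with respect to whatever produced yhat
rss <- torch_sum((yhat[mask] - y[mask])^2)
```

Masking before subtracting matters: computing `(yhat - y)^2` first and summing would propagate the NaNs into the total, whereas boolean indexing simply drops those rows and keeps the graph intact for the surviving ones.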
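The data-frame round trip for the join can also be avoided. Since x carries no gradient, the matching of rows can be done in plain R with `match()`; only the final gather and arithmetic need to be torch operations for autograd to work. A sketch, with `pred` and `act` as hypothetical example tensors in your (x, yhat) / (x, y) layout:

```r
library(torch)

# Hypothetical data: pred has columns (x, yhat), act has columns (x, y)
pred <- torch_tensor(matrix(c(1, 2, 3, 4,          # x
                              10, 20, 30, 40),     # yhat
                            ncol = 2))
act  <- torch_tensor(matrix(c(2, 4, 5,             # x
                              21, 39, 55),         # y
                            ncol = 2))

# x needs no gradient, so compute the join indices in plain R:
# position of each actual x within the predicted x column (NA if absent)
idx  <- match(as_array(act[, 1]), as_array(pred[, 1]))
keep <- !is.na(idx)   # drop actuals that have no matching prediction

# Only this part must stay in torch: gathering rows by index is a
# differentiable torch op, so gradients flow back into yhat
yhat_m <- pred[idx[keep], 2]
y_m    <- act[which(keep), 2]
rss    <- torch_sum((yhat_m - y_m)^2)
```

This handles missing data by construction (unmatched rows are simply dropped before any arithmetic), so no NaN placeholders or masking are needed afterwards.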
