False Negative Error: org.apache.spark.sql.AnalysisException: cannot resolve 'A' given input columns

I am facing a challenge executing my R code with sparklyr (version 1.7.7): it throws the error "cannot resolve a 'column'", which implies the column is not present, yet the column does exist in my Spark data frame. The same R code works with the previous sparklyr version (1.7.5).
My guess is that it fails to resolve columns once the data frame goes beyond roughly 100 columns.
The error is not even consistent: it either reports "column not found" or an invalid data type cast (decimal vs. big decimal, or date vs. timestamp).
Below is the link to the code snippet and the approaches taken to resolve it.
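For readers without access to the linked snippet, here is a hypothetical minimal sketch of the kind of sparklyr pipeline where this error surfaces. The table name, column names, and connection settings are illustrative, not taken from the original code:

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance (connection settings are illustrative).
sc <- spark_connect(master = "local")

# Build a wide data frame (150 columns) to mimic the "100+ columns" scenario.
wide_df <- as.data.frame(matrix(rnorm(10 * 150), nrow = 10))
names(wide_df) <- paste0("col_", seq_len(150))
wide_tbl <- copy_to(sc, wide_df, "wide_tbl", overwrite = TRUE)

# A simple transformation on a column that clearly exists in the Spark table;
# per the report above, queries like this intermittently failed on sparklyr
# 1.7.7 with "cannot resolve '<column>' given input columns".
result <- wide_tbl %>%
  mutate(col_120 = as.numeric(col_120) * 2) %>%
  select(col_1, col_120) %>%
  collect()

spark_disconnect(sc)
```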

I am satisfied with the solution provided by @edgararuiz. After installing the dev version of sparklyr, my code executed successfully. Thanks.
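For anyone landing here later, a hedged sketch of how the development version of sparklyr is typically installed from GitHub (this assumes the remotes package and the standard sparklyr/sparklyr repository; it is not the exact command from the accepted answer):

```r
# Install the development version of sparklyr from GitHub
# (assumes the remotes package and the sparklyr/sparklyr repository).
install.packages("remotes")
remotes::install_github("sparklyr/sparklyr")

# Restart the R session, then confirm which version is now installed.
packageVersion("sparklyr")
```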
