
I have 10 data frames (`pyspark.sql.dataframe.DataFrame`), obtained from `randomSplit`:

```python
(td1, td2, td3, td4, td5, td6, td7, td8, td9, td10) = td.randomSplit(
    [.1, .1, .1, .1, .1, .1, .1, .1, .1, .1], seed=100)
```

Now I want to join 9 of the `td`s into a single data frame. How should I do that?

I have already tried `unionAll`, but that method accepts only two data frames at a time:

```python
td1_2 = td1.unionAll(td2)         # this is working fine
td1_2_3 = td1.unionAll(td2, td3)  # TypeError: unionAll() takes exactly 2 arguments (3 given)
```

Is there any way to combine more than two data frames row-wise?
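One pattern I have been considering (a sketch, not a confirmed solution) is to fold the list of data frames pairwise with `functools.reduce`, since `unionAll` only takes two operands. The `FakeDF` class below is a hypothetical stand-in used only to illustrate the fold without a Spark session; with real PySpark DataFrames the call to `union_all` would be the same:

```python
from functools import reduce

def union_all(dfs):
    # Fold the sequence pairwise: unionAll(unionAll(df1, df2), df3), ...
    return reduce(lambda a, b: a.unionAll(b), dfs)

# Hypothetical stand-in for a DataFrame, just to demonstrate the fold;
# with PySpark it would be: combined = union_all([td1, td2, ..., td9])
class FakeDF:
    def __init__(self, rows):
        self.rows = rows
    def unionAll(self, other):
        return FakeDF(self.rows + other.rows)

merged = union_all([FakeDF([1]), FakeDF([2]), FakeDF([3])])
print(merged.rows)  # → [1, 2, 3]
```

Is a reduce like this the idiomatic way, or is there a built-in I am missing?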