| arrange {SparkR} | R Documentation |
Sort a SparkDataFrame by the specified column(s). When applied to a WindowSpec, orderBy defines the ordering columns in that WindowSpec.
arrange(x, col, ...)

orderBy(x, col, ...)

## S4 method for signature 'SparkDataFrame,Column'
arrange(x, col, ...)

## S4 method for signature 'SparkDataFrame,character'
arrange(x, col, ..., decreasing = FALSE)

## S4 method for signature 'SparkDataFrame,characterOrColumn'
orderBy(x, col, ...)

## S4 method for signature 'WindowSpec,character'
orderBy(x, col, ...)

## S4 method for signature 'WindowSpec,Column'
orderBy(x, col, ...)
x
A SparkDataFrame to be sorted, or (for the WindowSpec methods) a WindowSpec.

col
A character vector or Column object(s) indicating the field(s) to sort on.

...
Additional sorting fields.

decreasing
A logical vector indicating the sort order for each column when a character vector is specified for col.
A SparkDataFrame with all rows sorted, or a WindowSpec with the ordering columns defined.
Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, write.df, write.jdbc, write.json, write.parquet, write.text

Other windowspec_method: partitionBy, rangeBetween, rowsBetween
## Not run:
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
path <- "path/to/file.json"
df <- read.json(sqlContext, path)
arrange(df, df$col1)
arrange(df, asc(df$col1), desc(abs(df$col2)))
arrange(df, "col1", decreasing = TRUE)
arrange(df, "col1", "col2", decreasing = c(TRUE, FALSE))
## End(Not run)
## Not run:
orderBy(ws, "col1", "col2")
orderBy(ws, df$col1, df$col2)
## End(Not run)
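For context, a hedged sketch of how an ordered WindowSpec is typically consumed with over(). It assumes an active SparkR session (sparkR.session() in SparkR >= 2.0) and uses windowPartitionBy(), row_number(), and over() from SparkR; the data frame and column names are illustrative only.

```r
## Not run:
library(SparkR)

# Illustrative data; assumes sparkR.session() has been called.
df <- createDataFrame(data.frame(dept   = c("a", "a", "b", "b"),
                                 salary = c(100, 300, 200, 250)))

# Build a WindowSpec partitioned by dept, then use the orderBy method
# documented above to order rows within each partition by descending salary.
ws <- orderBy(windowPartitionBy("dept"), desc(df$salary))

# Number rows within each department by descending salary.
ranked <- withColumn(df, "rk", over(row_number(), ws))
head(arrange(ranked, "dept", "rk"))
## End(Not run)
```

Note that orderBy on a SparkDataFrame sorts the whole result, whereas orderBy on a WindowSpec only defines the ordering used when a window function is evaluated over each partition.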