Managed Projects

Apache Spark

Claimed by Apache Software Foundation. Analyzed about 19 hours ago.

Apache Spark is an open source cluster computing system that aims to make data analytics fast, both fast to run and fast to write. To run programs faster, Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, more rapidly than with disk-based systems like Hadoop. To make programming faster, Spark offers high-level APIs in Scala, Java, and Python, letting you manipulate distributed datasets like local collections. You can also use Spark interactively to query big data from the Scala or Python shells. Spark integrates closely with Hadoop: it can run inside Hadoop clusters and access any existing Hadoop data source.
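As a rough illustration of the API style the description refers to (not taken from the project's own documentation), a minimal Scala job using the classic RDD interface might look like the sketch below; the master URL, input path, and query strings are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

object LogQueryExample {
  def main(args: Array[String]): Unit = {
    // Placeholder configuration: local master and a hypothetical HDFS path.
    val conf = new SparkConf().setAppName("LogQueryExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Load from any Hadoop-supported source and keep the dataset in memory,
    // so the repeated queries below do not re-read from disk.
    val lines = sc.textFile("hdfs:///logs/access.log").cache()

    // Distributed datasets are manipulated like local Scala collections.
    val errorCount = lines.filter(_.contains("ERROR")).count()
    val warnCount  = lines.filter(_.contains("WARN")).count()

    println(s"errors=$errorCount, warnings=$warnCount")
    sc.stop()
  }
}

The same dataset operations can also be typed line by line in the interactive Scala or Python shells, which is what the description means by querying big data interactively.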

1.28M lines of code

374 current contributors

2 days since last commit

56 users on Open Hub

Very High Activity
Community rating: 5.0

Shark - Hive on Spark

Analyzed about 16 hours ago.

Hive on Spark

17K lines of code

0 current contributors

over 9 years since last commit

1 user on Open Hub

Inactive
Community rating: 0.0