Tags: Browse Projects

Select a tag to browse associated projects and drill deeper into the tag cloud.

deeplearning4j

Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library, designed for use in business environments. It aims to be cutting-edge yet plug-and-play, with more convention than configuration, allowing fast prototyping for non-researchers. It supports scale-out on Hadoop, Spark, and Akka plus AWS, and includes both a distributed, multi-threaded deep-learning framework and a conventional single-threaded one. Other features: iterative-reduce net training, the first framework adapted for a micro-service architecture, a versatile n-dimensional array class, and GPU integration.

1.1M lines of code

17 current contributors

5 months since last commit

5 users on Open Hub

Very Low Activity
Community rating: 4.0

eANN

eANN is an implementation of several kinds of neural networks, written to provide (hopefully) easy-to-use and easy-to-modify OOP source code. Several differently sized networks can run simultaneously, each functioning independently of the others or feeding its outputs to them as inputs. The structure is also easy to modify, so neurons (or even whole layers) can be created or pruned during simulation, allowing dynamic expansion and contraction of the network. Networks implemented:
* Multi-Layer Neural Network with Backpropagation
* Competitive Neural Network
* Radial Basis Neural Network
* Progressive Radial Neural Network
* Progressive Learning Neural Network

17.5K lines of code

1 current contributor

about 5 years since last commit

1 user on Open Hub

Inactive
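The first network eANN lists is a multi-layer network trained with backpropagation. As a generic illustration of that technique (a minimal NumPy sketch, not eANN's own code; the sizes and data here are arbitrary):

```python
import numpy as np

# Minimal backpropagation for a tiny 2-layer sigmoid network.
# Generic illustration of the technique, not eANN's implementation.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                            # 8 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy targets

W1 = rng.normal(scale=0.5, size=(3, 4))                # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1))                # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: propagate the squared-error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

After training, `mse` should be well below the 0.25 a constant 0.5 predictor would give on these 0/1 targets.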

delira

Lightweight framework for fast prototyping and training deep neural networks with PyTorch and TensorFlow

13.7K lines of code

12 current contributors

about 4 years since last commit

1 user on Open Hub

Inactive

Lasagne

Neural network tools for Theano.

12.4K lines of code

0 current contributors

over 4 years since last commit

0 users on Open Hub

Inactive
Licenses: No declared licenses

nolearn

Miscellaneous utilities for machine learning.

3.98K lines of code

1 current contributor

over 4 years since last commit

0 users on Open Hub

Inactive
Licenses: No declared licenses

Cloudml Zen

Zen aims to provide the largest-scale and most efficient machine learning platform on top of Spark, including but not limited to logistic regression, latent Dirichlet allocation, factorization machines, and DNNs.

15.7K lines of code

1 current contributor

over 5 years since last commit

0 users on Open Hub

Inactive
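The first algorithm on Zen's list is logistic regression. As a hedged, single-machine sketch of that algorithm in plain NumPy (illustrative only; Zen's actual implementation is distributed on Spark):

```python
import numpy as np

# Batch gradient descent for logistic regression on a toy dataset.
# A single-machine sketch of the algorithm, not Zen's Spark API.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])                       # hypothetical ground truth
y = (X @ true_w + rng.normal(scale=0.1, size=100) > 0).astype(float)

w = np.zeros(2)
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))               # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)                 # mean log-loss gradient

accuracy = float(np.mean(((X @ w) > 0) == (y == 1)))
```

On this nearly separable toy data the learned `w` recovers the decision boundary's direction and `accuracy` lands close to 1.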

DMLC Minerva

Minerva is a fast and flexible tool for deep learning on multiple GPUs. It provides an ndarray programming interface, just like NumPy, with both Python and C++ bindings. The resulting code can run on CPU or GPU, and multi-GPU support is very easy.

185K lines of code

0 current contributors

over 8 years since last commit

0 users on Open Hub

Inactive

Autumn Leaf

The Hacker's Machine Intelligence Framework, engineered by software developers, not scientists. Leaf is portable: run it on CPUs, GPUs, or FPGAs, on machines with or without an OS, with OpenCL or CUDA. Credit goes to Collenchyma and Rust. Leaf is part of the Autumn Machine Intelligence Platform, which is working on making AI algorithms 100x more computationally efficient and on bringing real-time, offline AI to smartphones and embedded devices. It is a core for high-performance machine intelligence applications. Leaf's design makes it easy to publish independent modules, so that e.g. deep reinforcement learning, visualization and monitoring, network distribution, automated preprocessing, or scalable production deployment become easily accessible to everyone.

7.31K lines of code

0 current contributors

over 6 years since last commit

0 users on Open Hub

Inactive

veles

Distributed machine learning platform for rapid deep-learning application development. Consists of:
* Platform - https://github.com/Samsung/veles
* Znicz Plugin - neural network engine
* Mastodon - Veles/Java bridge for Hadoop etc.
* SoundFeatureExtraction - audio feature extraction library
Written in Python, uses OpenCL or CUDA, employs flow-based programming, licensed under Apache 2.0.
1. Deploy VELES on a notebook or cluster with a single command
2. Create the model from >250 optimized units
3. Analyze and serve the dataset on the go using Loaders
4. Train it on a PC or high-performance cluster, interactively monitoring the training process
5. Publish the results
6. Automatically extract the trained model as an application
7. Run it in the cloud

68.8K lines of code

0 current contributors

5 months since last commit

0 users on Open Hub

Very Low Activity

Deep Scalable Sparse Tensor Network Engine (DSSTNE)

Amazon DSSTNE (pronounced "Destiny") is the Deep Scalable Sparse Tensor Network Engine, a library for training and deploying deep neural networks using GPUs. It is built to solve deep-learning problems at Amazon's scale and for production deployment of real-world deep-learning applications, emphasizing speed and scale over experimental flexibility.
* Multi-GPU scale: training and prediction both scale out across multiple GPUs, spreading computation and storage in a model-parallel fashion for each layer.
* Large layers: model-parallel scaling enables larger networks than are possible with a single GPU.
* Sparse data: DSSTNE is optimized for fast performance on sparse datasets; custom GPU kernels perform sparse computation on the GPU without filling in lots of zeroes.

41K lines of code

6 current contributors

about 4 years since last commit

0 users on Open Hub

Inactive
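The sparse-data point above, computing without filling in zeroes, can be sketched on the CPU: for a sparse input vector, a dense layer's product only needs the weight columns at the nonzero indices. A generic NumPy illustration (not DSSTNE's CUDA kernels; the shapes here are arbitrary):

```python
import numpy as np

# Idea behind sparse forward passes: with a sparse input x,
# y = W @ x only touches the columns of W at x's nonzero positions,
# so the mostly-zero dense vector is never materialized.
rng = np.random.default_rng(2)
W = rng.normal(size=(4, 10_000))     # dense layer: 10,000 inputs -> 4 outputs

# sparse input: only 3 of 10,000 features are nonzero
idx = np.array([5, 421, 9_876])      # nonzero positions
val = np.array([1.0, -2.0, 0.5])     # their values

y_sparse = W[:, idx] @ val           # touches 3 columns, not 10,000

# dense reference computation for comparison
x_dense = np.zeros(10_000)
x_dense[idx] = val
y_dense = W @ x_dense

same = bool(np.allclose(y_sparse, y_dense))
```

Both paths produce the same output; the sparse path does 3/10,000 of the work, which is the effect DSSTNE's custom kernels aim for on the GPU.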