Fast FDTD solver with GPU support, optimized for nanoscale optics: scanning near-field optical microscopy, rough-surface scattering, and solar cells. Uses the CUDA environment for GPU operation.
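To illustrate the kind of computation such a solver parallelizes, here is a minimal 1D FDTD (Yee-scheme) time-stepping loop in NumPy. This is a generic textbook sketch, not code from the project above; grid size, Courant factor, and the Gaussian source are arbitrary illustration values.

```python
import numpy as np

def fdtd_1d(steps=200, size=200):
    ez = np.zeros(size)  # electric field on the grid
    hy = np.zeros(size)  # magnetic field on the staggered grid
    for t in range(steps):
        # update H from the spatial difference (curl) of E
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # update E from the spatial difference (curl) of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # soft source: inject a Gaussian pulse at the grid center
        ez[size // 2] += np.exp(-((t - 30) ** 2) / 100.0)
    return ez

fields = fdtd_1d()
```

Each grid point's update depends only on its immediate neighbors, which is why the scheme maps so well onto thousands of GPU threads.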
The Hacker's Machine Intelligence Framework, engineered by software developers, not scientists.
Leaf is portable. Run it on CPUs, GPUs, or FPGAs, on machines with an OS or on machines without one. Run it with OpenCL or CUDA. Credit goes to Collenchyma and Rust.
Leaf is part of the Autumn Machine Intelligence Platform, which is working on making AI algorithms 100x more computationally efficient, bringing real-time, offline AI to smartphones and embedded devices.
Core for high-performance machine intelligence applications. Leaf's design makes it easy to publish independent modules, making e.g. deep reinforcement learning, visualization and monitoring, network distribution, automated preprocessing, and scalable production deployment easily accessible to everyone.
Distributed machine learning platform.
Distributed platform for rapid deep learning application development.
Consists of:
Platform - https://github.com/Samsung/veles
Znicz Plugin - Neural Network engine
Mastodon - Veles Java bridge for Hadoop etc.
SoundFeatureExtraction - audio feature extraction library
Written in Python, uses OpenCL or CUDA, employs Flow-Based Programming, licensed under Apache 2.0.
1 Deploy VELES on Notebook or Cluster with a single command
2 Create the model from >250 optimized units
3 Analyze and serve the dataset on the go using Loaders
4 Train it on PC or High Performance Cluster, interactively monitoring the training process
5 Publish the results
6 Automatically extract the trained model as an application
7 Run it in the cloud
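The flow-based programming style mentioned above can be sketched as units wired into a dataflow graph, each passing its output downstream. This is a generic illustration of the paradigm, NOT the actual VELES API; the `Unit`, `link`, and `run` names are hypothetical.

```python
# Hypothetical flow-based pipeline: each Unit wraps a function and
# forwards its result to every linked downstream unit.
class Unit:
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def link(self, other):
        self.downstream.append(other)
        return other  # allow chained linking

    def run(self, data):
        out = self.fn(data)
        for unit in self.downstream:
            unit.run(out)
        return out

results = []
loader = Unit(lambda _: [1.0, 2.0, 3.0])             # loads a toy dataset
scaler = Unit(lambda xs: [x / max(xs) for x in xs])  # preprocessing unit
sink = Unit(results.extend)                          # collects the output

loader.link(scaler).link(sink)
loader.run(None)
print(results)  # [0.3333333333333333, 0.6666666666666666, 1.0]
```

The appeal of the paradigm is that loaders, trainers, and monitors stay independent components that can be rewired without touching each other's code.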
MUMmerGPU is a high-throughput DNA sequence alignment program that runs on NVIDIA G80-class GPUs. It aligns sequences in parallel on the video card to achieve a more than 3-fold speedup over the widely used serial CPU program MUMmer.
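For readers unfamiliar with the task, here is the exact-match core of this kind of alignment in its simplest form: find every position where a read occurs verbatim in the reference. MUMmer-family tools do this with suffix trees (and MUMmerGPU evaluates many reads in parallel on the GPU); the naive sketch below only shows the problem being solved, not the actual algorithm or code.

```python
# Naive exact-match search: every reference offset where `read` occurs.
# Real aligners use suffix trees/arrays to avoid this O(n*k) scan.
def exact_matches(reference, read):
    hits = []
    k = len(read)
    for i in range(len(reference) - k + 1):
        if reference[i:i + k] == read:
            hits.append(i)
    return hits

ref = "ACGTACGTGACGT"
print(exact_matches(ref, "ACGT"))  # [0, 4, 9]
```

Since each read's search is independent, the workload parallelizes naturally: one GPU thread (or block) per read.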