Sunday, October 21, 2018

10 Open-Source Tools/Frameworks for Artificial Intelligence

TensorFlow

An open-source software library for Machine Intelligence.

TensorFlow™ is an open-source software library for numerical computation using data flow graphs, originally developed by researchers and engineers working on the Google Brain team. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs on a desktop, server, or mobile device with a single API.
TensorFlow provides multiple APIs. The lowest-level API, TensorFlow Core, gives you complete programming control. The higher-level APIs are built on top of TensorFlow Core and are typically easier to learn and use; they also make repetitive tasks simpler and more consistent between different users. A high-level API like tf.estimator helps you manage data sets, estimators, training, and inference.
The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions.
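To make tensors, rank, and the data flow graph concrete, here is a minimal sketch using the TensorFlow 1.x Python API (the version current when this was written); the shapes and values are illustrative only.

```python
# Minimal TensorFlow 1.x sketch: tensors of different ranks and a tiny
# data flow graph. Values are illustrative only.
import tensorflow as tf

scalar = tf.constant(3.0)                  # rank 0: a single value
vector = tf.constant([1.0, 2.0, 3.0])      # rank 1: shape (3,)
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])         # rank 2: shape (2, 2)

# Nodes (operations) connected by tensor-carrying edges form the graph.
product = tf.matmul(matrix, tf.reshape(vector[:2], [2, 1]))

with tf.Session() as sess:                 # the graph only runs inside a session
    print(sess.run(product))               # [[ 5.], [11.]]
```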
A few Google applications using TensorFlow are:
RankBrain: A large-scale deployment of deep neural nets for search ranking on www.google.com
Inception Image Classification Model: Baseline model and follow-on research into highly accurate computer vision models, starting with the model that won the 2014 ImageNet image classification challenge
SmartReply: Deep LSTM model to automatically generate email responses
Massively Multitask Networks for Drug Discovery: A deep neural network model for identifying promising drug candidates by Google in association with Stanford University.
On-Device Computer Vision for OCR: On-device computer vision model to do optical character recognition to enable real-time translation

Useful Links

Tensorflow home
GitHub
Getting started

Apache SystemML

An optimal workplace for machine learning using big data.

SystemML, the machine-learning technology created at IBM, has reached top-level project status at the Apache Software Foundation. It is a flexible, scalable machine learning system whose important characteristics are:
Algorithm customizability via R-like and Python-like languages.
Multiple execution modes, including Spark MLContext, Spark Batch, Hadoop Batch, Standalone, and JMLC (Java Machine Learning Connector).
Automatic optimization based on data and cluster characteristics to ensure both efficiency and scalability.
SystemML can be thought of as SQL for machine learning. The latest version (1.0.0) of SystemML supports Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop 2.6+, and Spark 2.1+.
It can be run on top of Apache Spark, where it automatically scales your data, line by line, determining whether your code should run on the driver or on an Apache Spark cluster. Future SystemML developments include additional deep learning with GPU capabilities, such as importing and running neural network architectures and pre-trained models for training.
Java Machine Learning Connector (JMLC) for SystemML
The Java Machine Learning Connector (JMLC) API is a programmatic interface for interacting with SystemML in an embedded fashion. The primary purpose of JMLC is as a scoring API, where your scoring function is expressed using SystemML’s DML (Declarative Machine Learning) language. In addition to scoring, embedded SystemML can be used for tasks such as unsupervised learning (for example, clustering) in the context of a larger application running on a single machine.
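The same embedded-script idea is available from Python on Spark. The sketch below is a rough illustration, assuming the systemml Python package and its MLContext/dml helpers as documented at the time; exact names may differ between releases, and the DML script and values are made up.

```python
# Hedged sketch of SystemML's Python MLContext API (the Python/Spark analogue
# of embedded scoring with JMLC). Package and method names are assumptions
# based on the SystemML documentation of this era.
from pyspark.sql import SparkSession
from systemml import MLContext, dml

spark = SparkSession.builder.appName("systemml-sketch").getOrCreate()
ml = MLContext(spark)

# A tiny DML (Declarative Machine Learning) script: score X against weights w.
script_text = """
X = matrix("1 2 3 4 5 6", rows=2, cols=3)
w = matrix("0.5 0.5 0.5", rows=3, cols=1)
scores = X %*% w
"""
script = dml(script_text).output("scores")
scores = ml.execute(script).get("scores").toNumPy()
print(scores)   # [[3.], [7.5]]
```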

Useful Links

SystemML home
GitHub

Caffe

A deep learning framework made with expression, speed, and modularity in mind.

The Caffe project was initiated by Yangqing Jia during his Ph.D. at UC Berkeley and later developed by Berkeley AI Research (BAIR) and by community contributors. It mostly focuses on convolutional networks for computer vision applications. Caffe is a solid and popular choice for computer vision-related tasks, and you can download many successful models made by Caffe users from the Caffe Model Zoo (link below) for out-of-the-box use.

Caffe Advantages

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag, then train on a GPU machine and deploy to commodity clusters or mobile devices (see the sketch after this list).
Extensible code fosters active development. In Caffe's first year, it was forked by over 1,000 developers and had many significant changes contributed back.
Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU.
Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.
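As a rough illustration of the single-flag CPU/GPU switch and of loading a Model Zoo network, here is a hedged pycaffe sketch; the prototxt and caffemodel file names are placeholders, not real paths.

```python
# Hedged pycaffe sketch: the CPU/GPU switch is a single call, and a Model Zoo
# network is loaded from its prototxt definition plus pretrained weights.
# File names below are placeholders.
import numpy as np
import caffe

use_gpu = False
if use_gpu:
    caffe.set_device(0)      # pick GPU 0
    caffe.set_mode_gpu()
else:
    caffe.set_mode_cpu()

# Model structure and learned weights are plain files; no hard-coding needed.
net = caffe.Net("deploy.prototxt", "model.caffemodel", caffe.TEST)

# Feed one input blob and run a forward pass (shape depends on the model).
net.blobs["data"].data[...] = np.random.rand(*net.blobs["data"].data.shape)
output = net.forward()
print({name: blob.shape for name, blob in output.items()})
```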

Useful Links

Caffe home
GitHub
Caffe user group
Tutorial presentation of the framework and a full-day crash course
Caffe Model Zoo

Apache Mahout

A distributed linear algebra framework and mathematically expressive Scala DSL

Mahout was designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed backend, and Mahout can be extended to other distributed backends.
Mathematically Expressive Scala DSL
Support for Multiple Distributed Backends (including Apache Spark)
Modular Native Solvers for CPU/GPU/CUDA Acceleration
Apache Mahout currently implements algorithms in areas including collaborative filtering (CF), clustering, and categorization.

Features/Applications

Taste CF. Taste is an open-source project for CF (collaborative filtering) started by Sean Owen on SourceForge and donated to Mahout in 2008.
Several MapReduce-enabled clustering implementations, including k-Means, fuzzy k-Means, Canopy, Dirichlet, and Mean-Shift.
Distributed Naive Bayes and Complementary Naive Bayes classification implementations.
Distributed fitness function capabilities for evolutionary programming.
Matrix and vector libraries.
Examples of all of the above algorithms.

Useful Links

Mahout home
GitHub
Intro to Mahout by Grant Ingersoll

OpenNN 

An open-source class library written in C++, which implements neural networks.

OpenNN (Open Neural Networks Library), formerly known as Flood, is based on the Ph.D. thesis of R. Lopez, "Neural Networks for Variational Problems in Engineering," Technical University of Catalonia, 2008.

OpenNN implements data mining methods as a bundle of functions. These can be embedded in other software tools using an application programming interface (API) for the interaction between the software tool and the predictive analytics tasks. The main advantage of OpenNN is its high performance. It is developed in C++ for better memory management and higher processing speed and implements CPU parallelization by means of OpenMP and GPU acceleration with CUDA.
The package comes with unit testing, many examples, and extensive documentation. It provides an effective framework for the research and development of neural network algorithms and applications. Neural Designer is a professional predictive analytics tool that uses OpenNN, meaning that the neural engine of Neural Designer has been built with OpenNN.
OpenNN has been designed to learn from both datasets and mathematical models.

Datasets

Function regression.
Pattern recognition.
Time series prediction.

Mathematical Models

Optimal control.
Optimal shape design.

Datasets and Mathematical Models

Inverse problems.

Useful Links 

OpenNN home 
OpenNN Artelnics GitHub
Neural Designer

Torch

An open-source machine learning library, a scientific computing framework, and a scripting language based on the Lua programming language.

  • a powerful N-dimensional array
  • lots of routines for indexing, slicing, transposing, …
  • amazing interface to C, via LuaJIT
  • linear algebra routines
  • neural network, and energy-based models
  • numeric optimization routines
  • Fast and efficient GPU support
  • Embeddable, with ports to iOS and Android backends
Torch is used by the Facebook AI Research Group, IBM, Yandex, and the Idiap Research Institute. It has been extended for use on Android and iOS and has been used to build hardware implementations for data flows like those found in neural networks.
Facebook has released a set of extension modules as open source software.
PyTorch is an open-source machine learning library for Python, used for applications such as natural language processing. It is primarily developed by Facebook's artificial intelligence research group, and Uber's "Pyro" software for probabilistic programming is built upon it.
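For readers coming from Python, a minimal PyTorch sketch shows the same tensor-plus-neural-network style of programming; the architecture, shapes, and values here are illustrative only.

```python
# Minimal PyTorch sketch: tensors with autograd and a tiny fully connected
# network trained with SGD. Shapes and values are illustrative only.
import torch
import torch.nn as nn

x = torch.randn(8, 4)                      # batch of 8 examples, 4 features
y = torch.randn(8, 1)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(100):                    # a short training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                        # autograd computes the gradients
    optimizer.step()

print(loss.item())
```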

Useful Links

Torch Home
GitHub

Neuroph

An object-oriented neural network framework written in Java.

Neuroph can be used to create and train neural networks in Java programs. It provides a Java class library as well as a GUI tool, easyNeurons, for creating and training neural networks. Neuroph is a lightweight Java neural network framework for developing common neural network architectures. It contains a well-designed, open-source Java library with a small number of basic classes that correspond to basic NN concepts, plus a nice GUI neural network editor for quickly creating Java neural network components. It has been released as open source under the Apache 2.0 license.
Neuroph's core classes correspond to basic neural network concepts such as artificial neuron, neuron layer, neuron connection, weight, transfer function, input function, and learning rule. Neuroph supports common neural network architectures such as multilayer perceptrons with backpropagation, and Kohonen and Hopfield networks. All these classes can be extended and customized to create custom neural networks and learning rules. Neuroph has built-in support for image recognition.

Useful Links

Neuroph Home
GitHub

Deeplearning4j

The first commercial-grade, open-source, distributed deep-learning library written for Java and Scala.

Deeplearning4j aims to be cutting-edge plug-and-play, with more convention than configuration, which allows fast prototyping for non-researchers.
DL4J is customizable at scale.
DL4J can import neural net models from most major frameworks via Keras, including TensorFlow, Caffe and Theano, bridging the gap between the Python ecosystem and the JVM with a cross-team toolkit for data scientists, data engineers and DevOps. Keras is employed as Deeplearning4j's Python API.
Machine learning models are served in production with Skymind's model server.
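The Keras side of that bridge can be as simple as saving a trained model to HDF5, which DL4J's Keras model import facility can then load on the JVM. The sketch below shows only the Python half; the architecture and file name are arbitrary placeholders.

```python
# Hedged sketch of the Python half of the Keras -> DL4J bridge: define and
# save a Keras model as a single HDF5 file, which DL4J's Keras model import
# can then read on the JVM side. Architecture and file name are placeholders.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation="relu", input_shape=(20,)),
    Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Saves architecture + weights together for import on the JVM.
model.save("keras_model.h5")
```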

Features

  • Distributed CPUs and GPUs
  • Java, Scala and Python APIs
  • Adapted for micro-service architecture
  • Parallel training via iterative reduce
  • Scalable on Hadoop
  • GPU support for scaling on AWS

Libraries:

  • Deeplearning4J: Neural Net Platform
  • ND4J: Numpy for the JVM
  • DataVec: Tool for Machine Learning ETL Operations
  • JavaCPP: The Bridge Between Java and Native C++
  • Arbiter: Evaluation Tool for Machine Learning Algorithms
  • RL4J: Deep Reinforcement Learning for the JVM

Mycroft

Claimed to be the world's first open-source assistant; it may be used in anything from a science project to an enterprise software application.

Mycroft runs anywhere: on a desktop computer, inside an automobile, or on a Raspberry Pi. It is open-source software that can be freely remixed, extended, and improved, for example by writing custom skills (a minimal sketch follows).
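A Mycroft skill is a small Python class. The sketch below is only indicative: the skill, intent, and dialog names are invented, and the exact base-class import path has varied between Mycroft releases.

```python
# Rough sketch of a Mycroft skill; skill/intent/dialog names are invented,
# and the import path has varied across Mycroft releases.
from mycroft import MycroftSkill, intent_file_handler


class HelloWorldSkill(MycroftSkill):
    @intent_file_handler("hello.intent")   # matches phrases listed in hello.intent
    def handle_hello(self, message):
        self.speak_dialog("hello")         # speaks a line from hello.dialog


def create_skill():
    return HelloWorldSkill()               # entry point Mycroft looks for
```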

OpenCog

OpenCog is a project that aims to build an open-source artificial intelligence framework

OpenCog is a diverse assemblage of cognitive algorithms, each embodying their own innovations — but what makes the overall architecture powerful is its careful adherence to the principle of cognitive synergy. OpenCog was originally based on the release in 2008 of the source code of the proprietary "Novamente Cognition Engine" (NCE) of Novamente LLC. The original NCE code is discussed in the PLN book (ref below). Ongoing development of OpenCog is supported by Artificial General Intelligence Research Institute (AGIRI), the Google Summer of Code project, and others.
  • A graph database that holds terms, atomic formulas, sentences, and relationships as hypergraphs, giving them a probabilistic truth-value interpretation; this store is dubbed the AtomSpace (see the sketch after this list).
  • A satisfiability modulo theories solver, built in as a part of a generic graph query engine, for performing graph and hypergraph pattern matching (isomorphic subgraph discovery).
  • An implementation of a probabilistic reasoning engine based on probabilistic logic networks (PLN).
  • A probabilistic genetic program evolver called Meta-Optimizing Semantic Evolutionary Search, or MOSES, originally developed by Moshe Looks who is now employed at Google.
  • An attention allocation system based on economic theory, ECAN.
  • An embodiment system for interaction and learning within virtual worlds based in part on OpenPsi and Unity.
  • A natural language input system consisting of Link Grammar and RelEx, both of which employ AtomSpace-like representations for semantic and syntactic relations.
  • A natural language generation system called SegSim, with implementations NLGen and NLGen2.
  • An implementation of Psi-Theory for handling emotional states, drives, and urges, dubbed OpenPsi.
  • Interfaces to Hanson Robotics robots, including emotion modeling via OpenPsi.
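To make the AtomSpace idea concrete, here is a heavily hedged sketch using OpenCog's Python bindings; the module, class, and method names are assumptions based on the project's historical Python API and may differ between releases.

```python
# Hedged AtomSpace sketch; names are assumptions based on OpenCog's historical
# Python bindings and may differ by release.
from opencog.atomspace import AtomSpace, types, TruthValue

atomspace = AtomSpace()

# Atoms are nodes and links in a hypergraph, each carrying a truth value.
cat = atomspace.add_node(types.ConceptNode, "cat")
animal = atomspace.add_node(types.ConceptNode, "animal")

inheritance = atomspace.add_link(types.InheritanceLink, [cat, animal])
inheritance.tv = TruthValue(0.9, 0.8)      # (strength, confidence)

print(inheritance)
```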

Useful Links

OpenCog Home
GitHub
OpenCog Wiki

Source: https://dzone.com