6 Open Source Machine Learning Frameworks and Tools


Like every other sphere of technological development, Machine Learning (ML) has advanced at a remarkable pace. This can be measured not just by the growing number of products built on ML or offering ML capabilities, but also by the growing number of methodologies and frameworks used with ML, most of which are driven by open source projects.

In today’s development arena, researchers and developers beginning a new project can be overwhelmed by the sheer choice of frameworks available to them. These tools differ considerably from one another, offer varied sets of capabilities, and each strikes its own balance between established practice and innovation.

Listed below are 6 popular open source ML frameworks, an overview of what they offer, and the use cases they are best suited for.

TensorFlow


TensorFlow is an open source framework released by Google in 2015. Originally part of Google’s internal infrastructure for deep learning and artificial neural networks, it facilitates the building of neural networks by stacking mathematical operations in the form of a computational graph.

During training, TensorFlow propagates errors backward through the computational graph and, over time, learns the model’s parameters. TensorFlow is generic enough to create virtually any kind of network, from image or text classifiers to increasingly sophisticated models such as Generative Adversarial Networks (GANs). Other frameworks can also be built on top of TensorFlow.
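As a rough, minimal sketch of this graph-based style (using the TensorFlow 1.x-era API that was current when the framework was released, and toy data invented purely for illustration), fitting a simple line y = 2x + 1 looks like this:

# Minimal sketch of TensorFlow's graph style: stack operations into a
# computational graph, then execute and train it inside a session.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])   # input node
y = tf.placeholder(tf.float32, shape=[None])   # target node
w = tf.Variable(0.0)                           # learnable weight
b = tf.Variable(0.0)                           # learnable bias

pred = w * x + b                               # forward pass as graph ops
loss = tf.reduce_mean(tf.square(pred - y))     # mean squared error
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # backprop step

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data_x = np.array([0.0, 1.0, 2.0, 3.0])
    data_y = 2.0 * data_x + 1.0                # toy targets: y = 2x + 1
    for _ in range(200):
        sess.run(train, feed_dict={x: data_x, y: data_y})
    print(sess.run([w, b]))                    # converges toward [2.0, 1.0]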

TensorFlow has gained popularity and become the go-to industry standard for deep learning. It also boasts the most diverse and active ecosystem of tools and developers among DL frameworks.

Given that deep learning frameworks evolve almost daily, it matters whether a Stack Overflow search will actually produce results. The good news is that nearly every bug and obstacle within TensorFlow can be resolved through Stack Overflow.

The pitfall many users face is complexity of usage. This is partly because the framework itself is highly customizable, and each computation in the NN needs to be programmed explicitly. While this can be a benefit for some, TensorFlow can prove quite cumbersome for users who are not comfortable with every detail of the model they are attempting to build.

Keras


Keras is a high-level interface for deep learning platforms such as Google’s TensorFlow discussed above. Created by Francois Chollet in 2015, Keras grew to become the second most popular DL framework after TensorFlow. This is called out in its mission – “to make drafting DL models as easy as writing new methods in Python.”

For those who have struggled with TensorFlow, Keras usually comes as a welcome relief. With Keras, users can create common neural layers, choose metrics, optimization methods, and error functions, and get a model trained quickly and efficiently.

The main advantage of Keras is its modularity. Every neural building block is available in the library, and users can easily compose them on top of one another to create more customized and elaborate models, as the sketch below shows.
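A minimal sketch of this modular style, assuming a toy binary-classification task with randomly generated data in place of a real dataset:

# Minimal sketch of Keras's modular style: stack layers, pick a loss,
# an optimizer, and metrics, then train with a single call.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),  # hidden layer
    Dense(1, activation='sigmoid'),                   # binary output
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Dummy data, just to make the sketch runnable end to end.
X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=(100,))
model.fit(X, y, epochs=5, batch_size=16)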

SciKit-learn

Given the recent expansion of Deep Learning, it is easy to fall into the misconception that the more traditional Machine Learning models are now redundant. This is far from the truth. Many everyday ML tasks are still solved using the classic models that were the industry standard before the so-called Deep Learning boom. These traditional models are also refreshingly simpler and easier to use.

While DL models are excellent at capturing patterns, it is often hard to explain what they have actually learned, and they tend to be expensive to train and deploy. The more routine problems of dimensionality reduction, clustering, and feature selection can all be solved easily with traditional models.

This is where SciKit-learn comes in. SciKit-learn is a framework backed by academia that has just celebrated its 10th year in the field. It implements practically every classical machine learning model available today, from logistic and linear regression to random forests and SVM classifiers. SciKit-learn also comes with a comprehensive toolbox of preprocessing methods, including text transformations, dimensionality reduction, and others.

SciKit-learn is perhaps one of the greatest achievements of the Python community. Its user guide is almost as good as a textbook for machine learning and data science. Even a small startup tempted to jump into the deep learning fray should consider SciKit-learn as a starting point; it often yields similar results at a fraction of the development time, as in the sketch below.
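As a minimal sketch, here is a SciKit-learn pipeline chaining standard preprocessing with a classic model, using the library’s bundled iris dataset for illustration:

# Minimal sketch: a preprocessing + classifier pipeline in SciKit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Chain feature scaling with a classic ensemble model in one object.
clf = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=100))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data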

Apache Spark

Apache Spark forms a significant part of IBM’s Deep Learning capabilities and is designed with cluster computing in mind. It contains MLlib, a distributed machine learning framework that works in conjunction with Spark’s distributed memory architecture. MLlib comes with a vast number of commonly used statistical tools and machine learning algorithms.
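As a rough sketch of MLlib’s DataFrame-based API (assuming Spark 2.x with PySpark, and a tiny hand-made dataset standing in for real distributed data):

# Minimal sketch of MLlib: training a classifier on a Spark DataFrame.
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Toy labeled data; in practice this would be a large, distributed DataFrame.
df = spark.createDataFrame([
    (0.0, Vectors.dense([0.0, 1.1, 0.1])),
    (1.0, Vectors.dense([2.0, 1.0, -1.0])),
    (1.0, Vectors.dense([2.0, 1.3, 1.0])),
    (0.0, Vectors.dense([0.0, 1.2, -0.5])),
], ["label", "features"])

lr = LogisticRegression(maxIter=10, regParam=0.01)
model = lr.fit(df)               # training runs across the cluster
model.transform(df).show()       # predictions alongside input columns
spark.stop()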

Another significant component of IBM’s platform is SystemML. SystemML offers a high-level, declarative R-like syntax with built-in statistical functions, linear algebra primitives, and constructs specific to machine learning. Its scripts can execute within a single SMP machine, or run across many nodes in a distributed computation using Apache Spark or Hadoop’s MapReduce.

SystemML and Spark work well together: Spark can collect data using Spark Streaming and render it into a suitable representation, while SystemML determines the optimal algorithm to analyze that data, given the layout and configuration of the cluster.

One of the platform’s standout features is that it can automatically optimize data analysis based on the data and the cluster characteristics, ensuring the model is both scalable and efficient. Competing deep learning frameworks cannot make similar optimization decisions for the user without outside input.

Edward

Perhaps one of the most intriguing and promising developments the community has witnessed in a while is Edward.

Created by Dustin Tran, a researcher at Google, along with a group of AI contributors and researchers, Edward is built atop TensorFlow and combines three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.

Edward enables users to create Probabilistic Graphical Models (PGMs) and can be used to build Bayesian neural networks, together with models that can be expressed as a graph using probabilistic representations.
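As a hedged sketch of what a simple Edward model can look like (assuming Edward’s 1.x API and toy data invented purely for illustration), here is Bayesian linear regression with variational inference, closely following the style of Edward’s getting-started examples:

# Minimal sketch: Bayesian linear regression in Edward.
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Toy data from a known linear relationship (for illustration only).
N = 50
X_train = np.random.randn(N, 1).astype(np.float32)
y_train = (2.0 * X_train[:, 0] + 1.0
           + 0.1 * np.random.randn(N)).astype(np.float32)

# Model: Normal priors over the weight and bias.
w = Normal(loc=tf.zeros(1), scale=tf.ones(1))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X_train, w) + b, scale=tf.ones(N))

# Variational approximations for the posterior over w and b.
qw = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# Fit by minimizing the KL divergence between q and the true posterior.
inference = ed.KLqp({w: qw, b: qb}, data={y: y_train})
inference.run(n_iter=500)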

Edward’s uses are currently confined to more advanced AI models rather than real-world applications; however, since PGMs are becoming increasingly relevant to AI research, it is safe to assume that the near future holds more practical uses for Edward.

Lime

Among the toughest challenges in ML is debugging a model’s internal representations, or explaining what exactly the model has learned.

Lime is an easy-to-use Python package that explains these internal representations intelligibly. It takes a trained model as input, runs a second ‘meta’ approximator over it, and approximates the model’s behavior on many perturbed versions of the input. The output is essentially an explanation of the model: it identifies which parts of the input helped the model reach its decision and which did not.

Once this conclusion is reached, the results are displayed in an accessible, legible, and interpretable format. For a text classifier, for example, Lime highlights the words that helped the model reach its conclusion, along with their respective probabilities.

Lime works hand in hand with SciKit-learn models, and with any other classifier that accepts raw text or arrays and outputs a probability for each class, as in the sketch below.
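A minimal sketch of this workflow, assuming a tiny toy corpus invented purely to make the example runnable:

# Minimal sketch: explaining a SciKit-learn text classifier with Lime.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus (hypothetical data for illustration only).
texts = ["great movie, loved it", "terrible plot, awful acting",
         "wonderful and moving", "boring and bad"]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
# Lime perturbs the input and fits a local surrogate model around it.
exp = explainer.explain_instance("loved the wonderful acting",
                                 pipeline.predict_proba, num_features=4)
print(exp.as_list())  # words with their weights toward the prediction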

Lime can also explain the classification of images. In the example below, Lime shows which areas of the image the model used to classify it as ‘cat’ (the green area), and which parts of the image carried negative weight, pulling the classification toward ‘dog’.

[Image: Lime’s image explanation, with green regions supporting the ‘cat’ classification and negatively weighted regions leaning toward ‘dog’.]

Conclusion

This article lists some of the promising tools and significant frameworks available to machine learning engineers and researchers when they approach a project, whether improving an existing system or building one from scratch.

Given the inevitable practical advancements in machine learning, the community can expect to see increasingly sophisticated tools and frameworks in this field.
