Deep AI & Neural Networks
Wednesday, October 27, 2021
How does a machine classify different species of animals just by looking at an image? Computer Vision is the branch of machine learning that makes this possible, and deep learning is the key technique behind it. In this session, I will give an introduction to Computer Vision and deep neural networks, and show how to build a serverless image classification application using Microsoft Azure Functions and the ML.NET framework. The implementation will be in C#.
Enterprises with AI experience face an uphill struggle: research shows that only 53% of AI projects make it from prototype to production. This issue can largely be attributed to difficulties navigating the cumbersome deep learning lifecycle, as new features and use cases are stymied by limited hardware availability, slow and ineffective models, wasted time during development cycles, and financial barriers. AI developers need better tools that examine and address the algorithms themselves; otherwise, they will keep getting stuck. However, no single tool on the market gives developers production-grade performance while still being flexible and user-friendly. In this talk, Yonatan presents an innovative solution to this problem: using AI to craft the next generation of AI. Yonatan developed the Automated Neural Architecture Construction engine (AutoNAC), the first commercially viable Neural Architecture Search (NAS) technology, poised to unlock a range of AI opportunities for cloud, on-prem, and edge deployments, and more. His engine is capable of crafting state-of-the-art deep neural networks that can outperform the best open-source neural nets currently available.
In recent years, interest in sparse neural networks has steadily increased, accelerated by NVIDIA’s inclusion of dedicated hardware support in their recent Ampere GPUs. Sparse networks feature both limited interconnections between the neurons and restrictions on the number of neurons that are permitted to become active. Introducing this weight and activation sparsity significantly simplifies the computations required to both train and use the network. These sparse networks can achieve accuracy equivalent to their traditional ‘dense’ counterparts, yet have the potential to outperform the dense networks by an order of magnitude or more. In this presentation we start by discussing the opportunity associated with sparse networks and provide an overview of the state-of-the-art techniques used to create them. We conclude by presenting new software algorithms that unlock the full potential of sparsity on current hardware platforms, highlighting 100X speedups on FPGAs and 20X on CPUs and GPUs.
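The computational case for weight sparsity described above can be illustrated with a toy sketch in plain Python. This is not a tuned sparse kernel of the kind the talk presents; the matrix size and the assumed 90% pruning ratio are invented for the example. The point is only that a matrix-vector product over stored nonzeros does proportionally less work than the dense equivalent while producing the same output:

```python
import random

def dense_matvec(W, x):
    """Dense mat-vec: every weight participates, even the zeros."""
    ops = 0
    y = [0.0] * len(W)
    for i, row in enumerate(W):
        for j, w in enumerate(row):
            y[i] += w * x[j]
            ops += 1
    return y, ops

def sparse_matvec(nz, n_rows, x):
    """Sparse mat-vec over (row, col, weight) triples: only nonzeros cost work."""
    ops = 0
    y = [0.0] * n_rows
    for i, j, w in nz:
        y[i] += w * x[j]
        ops += 1
    return y, ops

random.seed(0)
N = 64
# Weight matrix with roughly 90% of entries pruned to zero (weight sparsity).
W = [[random.gauss(0, 1) if random.random() < 0.1 else 0.0 for _ in range(N)]
     for _ in range(N)]
nz = [(i, j, w) for i, row in enumerate(W) for j, w in enumerate(row) if w != 0.0]
x = [random.gauss(0, 1) for _ in range(N)]

y_dense, dense_ops = dense_matvec(W, x)
y_sparse, sparse_ops = sparse_matvec(nz, N, x)

# Same answer, far fewer multiply-accumulates.
assert all(abs(a - b) < 1e-9 for a, b in zip(y_dense, y_sparse))
print(f"dense ops: {dense_ops}, sparse ops: {sparse_ops}, "
      f"speedup ~{dense_ops / sparse_ops:.1f}x")
```

Real sparse kernels must also beat the memory-access irregularity this naive triple list introduces, which is exactly where the hardware- and software-level techniques discussed in the talk come in.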
AI is a term that has been thrown around in the cybersecurity industry for quite some time. The components typically referenced when talking about AI are machine learning and deep learning, but what are the differences? When it comes to cybersecurity, AI can be a huge leap forward in combating cyberattacks, but not all solutions are the same. If AI could be the silver bullet, why are today's AI solutions not working? Many of the traditional machine learning cybersecurity solutions currently available are causing massive operational challenges, as they do not adequately combat ever-evolving and sophisticated threats. Endpoint detection and response (EDR) solutions are insufficient because they can typically take 10 minutes or more to identify a threat in the environment, while malware takes under three seconds to infect and start encrypting a system; that is why time is of the essence. You have to prevent the infection and the damage it can inflict before it takes root, executes, and spreads. One important item of note is the emerging trend of adversarial machine learning being leveraged by cybercriminals; how can this be combated? Executives and security leaders need to start adopting a preventative approach to cybersecurity utilizing the latest in cutting-edge security solutions, which is only made possible through the use of AI and, more importantly, the use of deep learning. The great news is that AI technologies are advancing. Deep learning has proven to be the most effective preventative cybersecurity solution to date, delivering unmatched prevention rates with the lowest false positive rates. As organizations evaluate new technologies, a firm understanding of the differences, challenges, and benefits of all AI solutions is a must. Therefore, educational advancements in machine learning and deep learning are well warranted.
Thursday, October 28, 2021
NLP is a key component in many data science systems that must understand or reason about text. This hands-on tutorial uses the open-source Spark NLP library to explore advanced NLP in Python. Spark NLP provides state-of-the-art accuracy, speed, and scalability for language understanding by delivering production-grade implementations of some of the most recent research in applied deep learning. It's the most widely used NLP library in the enterprise today. You'll edit and extend a set of executable Python notebooks by implementing these common NLP tasks: named entity recognition, sentiment analysis, spell checking and correction, document classification, and multilingual and multi-domain support. The discussion of each NLP task includes the latest advances in deep learning used to tackle it, including the prebuilt use of BERT embeddings within Spark NLP, using tuned embeddings, and 'post-BERT' research results like XLNet, ALBERT, and RoBERTa. Spark NLP builds on the Apache Spark and TensorFlow ecosystems, and as such it's the only open-source NLP library that can natively scale to use any Spark cluster, as well as take advantage of the latest processors from Intel and NVIDIA. You'll run the notebooks locally on your laptop, but we'll explain and show a complete case study and benchmarks on how to scale an NLP pipeline for both training and inference.
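To make one of the listed tasks concrete, here is a minimal edit-distance spell corrector in plain Python. This is a toy sketch of the idea only, not Spark NLP's implementation (which is deep-learning based and cluster-scalable); the tiny word-frequency vocabulary is invented for the example:

```python
import string

# Hypothetical vocabulary: word -> corpus frequency (invented for illustration).
VOCAB = {"entity": 50, "recognition": 40, "sentiment": 30, "analysis": 30,
         "classification": 20, "spark": 60, "language": 45}

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from `word`."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return `word` if known; otherwise the most frequent in-vocabulary
    candidate within one edit; otherwise the word unchanged."""
    if word in VOCAB:
        return word
    candidates = [w for w in edits1(word) if w in VOCAB]
    return max(candidates, key=VOCAB.get) if candidates else word

print(correct("langauge"))   # transposition fixed -> "language"
print(correct("sentimant"))  # substitution fixed -> "sentiment"
```

In the tutorial, the same task is handled by Spark NLP pipeline stages, which additionally use context and learned models rather than raw edit distance against a fixed word list.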