Many large enterprises and AWS customers are interested in adopting deep learning, with business use cases ranging from customer service (including object detection from images and video streams, and sentiment analysis) to fraud detection and collaboration. However, until recently, there were multiple difficulties with implementing deep learning in enterprise applications:

- The adoption learning curve was steep and required the development of internal technical expertise in new programming languages (e.g., Python) and frameworks.
- Deep learning training and inference are compute intensive and typically performed on GPUs, while large-scale data engineering has typically been programmed in Scala on multi-CPU, distributed Apache Spark.

In this tutorial, we share how the combination of the Deep Java Library (DJL), Apache Spark 3.x, and NVIDIA GPU computing simplifies deep learning pipelines while improving performance and reducing costs. In this post, you learn about the following:

- Deep learning use cases with Apache Spark
- Speeding up end-to-end ETL, ML, and DL pipelines with Apache Spark and NVIDIA GPU computing
- Deep Java Library (DJL), a deep learning framework implemented in Java that aims to make popular open source deep learning frameworks accessible to Java/Scala developers
- Creating a cluster of GPU machines and using Apache Spark with DJL on Amazon EMR for large-scale image classification in Scala

Apache Spark has emerged as the standard framework for large-scale, distributed data analytics processing. Spark's popularity comes from its easy-to-use APIs and high-performance big data processing, and it is integrated with high-level operators and libraries for SQL, stream processing, machine learning (ML), and graph processing. Many developers are looking for an efficient and easy way to integrate their deep learning (DL) applications with Spark. However, there is no official support for DL in Spark. There are libraries that try to solve this problem, such as TensorFlowOnSpark, Elephas, and CERN, but most of them are engine-dependent. Also, most deep learning frameworks (PyTorch, TensorFlow, Apache MXNet, and so on) do not have good support for the Java Virtual Machine (JVM), on which Spark runs.

Data processing and deep learning are often split into two pipelines: one for ETL processing and one for model training. Enabling deep learning frameworks to integrate with ETL jobs allows for more streamlined ETL/DL pipelines.

Deep learning use cases with Apache Spark

In this section, we walk through several DL use cases in different industries, using Scala.

Machine learning and deep learning have many applications in the financial industry. J.P. Morgan summarized six initiatives for its machine learning applications: anomaly detection, intelligent pricing, news analytics, quantitative client intelligence, smart documents, and virtual assistants. This indicates that deep learning has a place in many business areas of financial institutions.

Customer experience is an important topic for most financial institutions. A good example comes from Monzo bank, a fast-growing UK-based "challenger bank" that reached 3 million customers in 2019. Monzo successfully automated 30% to 50% of potential users' enquiries by applying recurrent neural networks (RNNs) to its users' sequential event data. Another example of applying deep learning to improve customer experience is Mastercard, a first-tier global payment solutions company. Mastercard successfully built a deep learning-based customer propensity recommendation system with Apache Spark and its credit card transaction data.
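To make the ETL side concrete, here is a minimal Scala sketch of the kind of high-level DataFrame operators Spark provides for data engineering. The dataset and column names (customerId, amount) are hypothetical, invented purely for illustration; this is not code from the tutorial.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EtlSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; on a cluster you would drop .master("local[*]").
    val spark = SparkSession.builder()
      .appName("etl-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical credit card transactions: (customerId, amount).
    val tx = Seq(("c1", 120.0), ("c1", 30.0), ("c2", 75.5)).toDF("customerId", "amount")

    // High-level operators: group, aggregate, filter -- typical ETL steps
    // that would feed feature vectors into a DL training or inference stage.
    val spendPerCustomer = tx
      .groupBy("customerId")
      .agg(sum("amount").as("totalSpend"))
      .where($"totalSpend" > 100)

    spendPerCustomer.show()
    spark.stop()
  }
}
```

Because these operators run distributed across the cluster, the same code scales from a laptop-sized sample to the full transaction history without changes.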
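The single streamlined ETL/DL pipeline described above can be sketched by running DJL inference inside a Spark job. This is a hedged sketch, not the tutorial's actual code: the S3 path is hypothetical, and it assumes an image-classification model resolvable from DJL's model zoo (here filtered to a 50-layer ResNet).

```scala
import java.io.ByteArrayInputStream

import ai.djl.Application
import ai.djl.modality.Classifications
import ai.djl.modality.cv.{Image, ImageFactory}
import ai.djl.repository.zoo.{Criteria, ModelZoo}
import org.apache.spark.sql.SparkSession

object SparkDjlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("etl-dl-pipeline").getOrCreate()

    // ETL stage: read raw images as binary records (path, content, ...).
    // The bucket/path is hypothetical.
    val images = spark.read.format("binaryFile")
      .option("pathGlobFilter", "*.jpg")
      .load("s3://my-bucket/images/")

    // DL stage: classify images in the same job, loading the model
    // once per partition rather than once per record.
    val predictions = images.select("path", "content").rdd.mapPartitions { rows =>
      val criteria = Criteria.builder()
        .optApplication(Application.CV.IMAGE_CLASSIFICATION)
        .setTypes(classOf[Image], classOf[Classifications])
        .optFilter("layers", "50") // assume a ResNet-50 is available in the zoo
        .build()
      val model = ModelZoo.loadModel(criteria)
      val predictor = model.newPredictor()

      rows.map { row =>
        val bytes = row.getAs[Array[Byte]]("content")
        val img = ImageFactory.getInstance().fromInputStream(new ByteArrayInputStream(bytes))
        val best = predictor.predict(img).best[Classifications.Classification]()
        (row.getAs[String]("path"), best.getClassName)
      }
    }

    predictions.toDebugString // lazy; an action such as saveAsTextFile would run it
    spark.stop()
  }
}
```

Keeping ETL and inference in one Spark application avoids writing intermediate data out between separate pipelines, which is the streamlining benefit the post describes.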