Bhusan Chettri Launches Tutorial Series on AI, Machine Learning, Deep Learning and Their Interpretability – EIN News


Dr. Bhusan Chettri

Dr Bhusan Chettri, who earned his PhD from Queen Mary University of London, aims to provide an overview of Machine Learning and AI interpretability.

LONDON, UNITED KINGDOM, September 24, 2022 /EINPresswire.com/ — Dr Bhusan Chettri, who earned his PhD from Queen Mary University of London, aims to provide an overview of Machine Learning and AI interpretability. To that end, Bhusan Chettri has launched a tutorial series on AI, Machine Learning, Deep Learning and their interpretability.

In his first tutorial, Bhusan Chettri focuses on providing an in-depth understanding of interpretable machine learning (IML) from multiple standpoints, taking into consideration different use cases and application domains, and emphasising why it is important to understand how machine learning models that demonstrate impressive results make their decisions. The tutorial also discusses whether such impressive results are trustworthy enough to be adopted by humans in safety-critical businesses such as medicine, finance and security. Visiting the first part of this tutorial series on AI, Machine Learning, Deep Learning and their Interpretability on his official website will give a fuller picture.

Bhusan recently published his second tutorial, in which he provides an overview of Interpretable Machine Learning (IML), also known as Explainable AI (xAI), taking into account safety-critical application domains such as medicine, finance and security. The tutorial discusses the need for explanations from AI and Machine Learning (ML) models, using two examples to set the context for the IML topic. Finally, it describes some of the important concepts, or criteria, that any ML/AI model in safety-critical applications must satisfy for successful adoption in real-world settings. Before going deeper into this edition, it is worth briefly revisiting the first part of this tutorial series on AI, Machine Learning, Deep Learning and their Interpretability.

Part 1 mainly focused on providing an overview of various aspects of AI, Machine Learning, Data, Big Data and Interpretability. It is well known that data is the driving fuel behind the success of every machine learning and AI application. The first part described how vast amounts of data are generated (and recorded) every single minute from different mediums such as online transactions, sensors, video surveillance applications and social media platforms such as Twitter, Instagram and Facebook. Today's fast-growing digital age, which produces such massive data, commonly referred to as Big Data, has been one of the key factors behind the apparent success of current AI systems across different sectors.

The tutorial also provided a brief overview of AI, Machine Learning and Deep Learning and highlighted their relationship: deep learning is a form of machine learning that uses artificial neural networks with more than one hidden layer to solve a problem by learning patterns from training data; machine learning solves a given problem by discovering patterns within the training data but does not necessarily involve neural networks (machine learning using neural networks is simply referred to as deep learning); and AI is a general term that encompasses both machine learning and deep learning. For example, a simple chess program consisting of a sequence of hard-coded if-else rules defined by a programmer can be regarded as AI that does not involve data, i.e. there is no data-driven learning. To put it simply, deep learning is a subset of machine learning, and machine learning is a subset of AI.
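
To make that distinction concrete, here is a minimal, hypothetical Python sketch (not taken from the tutorials): a hard-coded rule-based program stands in for classical AI, while a model whose parameters are fitted to data stands in for machine learning; swapping the fitted model for a network with more than one hidden layer would make it deep learning.

# Illustrative sketch only: contrasts a hard-coded rule-based "AI" with a
# data-driven machine learning model. Data and rules are toy assumptions.

# 1) Rule-based AI: the programmer writes the rules; no training data is used.
def rule_based_chess_opening(opponent_move):
    if opponent_move == "e4":
        return "e5"
    if opponent_move == "d4":
        return "d5"
    return "Nf6"

# 2) Machine learning: the model's parameters are learned from training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                # toy features
y = (X[:, 0] - X[:, 2] > 0).astype(int)      # toy labels

model = LogisticRegression().fit(X, y)       # patterns discovered from data
print(model.predict(X[:5]))

# 3) Deep learning would replace the logistic regression with a neural network
#    that has more than one hidden layer (see the back-propagation sketch below).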

The tutorial also briefly discussed the back-propagation algorithm, which is the engine of neural networks and deep learning models. Finally, it provided a basic overview of IML, stressing its need and importance for understanding how a model arrives at a particular outcome. It also briefly discussed a post-hoc IML framework (one that takes a pre-trained model and seeks to understand its behaviour), showcasing an ideal scenario with a human in the loop who makes the final decision on whether to accept or reject the model's prediction for a particular outcome.
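
As an illustration of the back-propagation idea mentioned above, the following is a minimal NumPy sketch (our own illustrative code, not code from the tutorial) that trains a tiny one-hidden-layer network by pushing the loss gradient backwards through the layers and updating the weights by gradient descent.

# Minimal back-propagation sketch: one hidden layer, sigmoid output,
# binary cross-entropy loss. Data and hyperparameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy XOR-like labels

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.5

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # predicted probabilities

    # Backward pass: propagate the loss gradient layer by layer (chain rule)
    dz2 = (p - y) / len(X)                      # gradient at the output pre-activation
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1.0 - h ** 2)                   # tanh derivative
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())

Each backward step is simply the chain rule applied to the forward computation, which is all back-propagation does at heart.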

In his recent tutorial, Bhusan Chettri provided insight into xAI and IML in safety-critical application domains such as medicine, finance and security, where deploying ML or AI requires satisfying certain criteria (such as fairness, trustworthiness and reliability). To that end, Dr Bhusan Chettri, who earned his PhD in Machine Learning and AI for Voice Technology from QMUL, London, described why interpretability is needed for today's state-of-the-art ML models, which offer impressive results as measured by a single evaluation metric (for example, classification accuracy). Bhusan Chettri elaborates on this in detail through two simple use cases of AI systems: wildlife monitoring (a dog-vs-wolf detector) and automatic tuberculosis detection. He further details how biases in training data can keep models from being adopted in real-world scenarios, and why understanding the training data and performing initial exploratory data analysis is equally crucial to ensure models behave reliably during deployment (a small illustrative check of this kind is sketched below).

Stay tuned for more on the topics of explainable AI. The next edition of this series will discuss different taxonomies of interpretable machine learning, along with various methods for opening black boxes and explaining the behaviour of ML models. Visit his website for more updates.
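
The dog-vs-wolf example usually comes down to a spurious correlation (for instance, wolves photographed against snow). A hypothetical pandas sketch of the kind of exploratory check described above might look as follows; the dataset and column names are illustrative assumptions, not material from the tutorial.

# Hypothetical exploratory check before training: inspect class balance and
# whether a background attribute is spuriously correlated with the label.
import pandas as pd

df = pd.DataFrame({
    "label":      ["wolf", "wolf", "wolf", "dog", "dog", "dog", "dog", "wolf"],
    "background": ["snow", "snow", "snow", "grass", "grass", "indoor", "grass", "snow"],
})

print(df["label"].value_counts(normalize=True))    # class balance
print(pd.crosstab(df["label"], df["background"],   # label vs background
                  normalize="index"))

If every "wolf" row shares the same background, a model can score well by learning the background rather than the animal, which is exactly the kind of training-data bias such a check is meant to surface.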



Source: https://www.einnews.com/pr_news/592610939/bhusan-chettri-s-launched-tutorials-series-of-ai-machine-learning-deep-learning-and-their-interpretability
