Claim adjudication, the process by which an insurance company determines its financial liability for a claim, is complex and time-consuming. Adjudication can be quick if the received claim is complete and accurate and falls within the limits of the policy. But, as with most things in life, that is rarely the case.
A wide variety of fraud patterns, combined with scarce data on fraud, makes insurance fraud detection a challenging problem. Many algorithms are available today for classifying claims as fraudulent or genuine. To understand how various classification algorithms perform in fraud detection, I compared them on vehicle insurance claims data.
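The comparison in the post uses real vehicle insurance claims; as a minimal sketch of the same idea, here is how two common classifiers might be compared with scikit-learn on synthetic, imbalanced data (the dataset, models, and metric here are illustrative assumptions, not the post's actual setup):

```python
# Sketch: comparing classifiers for fraud detection on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for claims data: ~5% positive ("fraud") class,
# mimicking the class imbalance typical of real fraud datasets.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # F1 on the minority (fraud) class is more informative than plain
    # accuracy when one class dominates.
    scores[name] = f1_score(y_test, model.predict(X_test))
print(scores)
```

With imbalanced classes, accuracy alone is misleading (predicting "genuine" for everything already scores 95%), which is why the sketch reports F1 on the fraud class instead.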
Research shows that children primarily learn languages by observing patterns in the words they hear. Computer scientists are taking a similar approach to train computers to process human language.
Imagine that you are working on machine translation or a similar Natural Language Processing (NLP) problem. Can you process the corpus as a whole? No. You have to break it into sentences first and then into words. This process of splitting an input corpus into smaller subunits is known as tokenization, and the resulting units are tokens. For instance, when paragraphs are split into sentences, each sentence is a token. This is a fairly straightforward process in English but not so in Malayalam (and some other Indic languages).
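The two-level split described above can be sketched in plain Python. This is a deliberately naive regex tokenizer for English, just to make the idea concrete; production NLP libraries handle abbreviations, quotes, and clitics far more carefully, and none of this carries over directly to Malayalam:

```python
import re

def sentence_tokenize(text):
    # Naive: split after sentence-final punctuation followed by whitespace.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def word_tokenize(sentence):
    # Naive: a word is a run of letters, digits, or apostrophes.
    return re.findall(r"[\w']+", sentence)

corpus = "Tokenization splits text. Each sentence becomes a token!"
sentences = sentence_tokenize(corpus)        # sentence-level tokens
words = [word_tokenize(s) for s in sentences]  # word-level tokens
print(sentences)
print(words)
```

Even this toy version shows why English is the easy case: sentence boundaries and word boundaries are mostly marked by punctuation and whitespace, which is not a safe assumption for every language.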
Artificial Intelligence (AI) powers many business functions across industries today, and its efficacy has been proven by a host of intelligent applications. Of these, chatbots are perhaps the best known. From healthcare to hospitality, retail to real estate, insurance to aviation, chatbots have become a ubiquitous and useful feature. But how are these chatbots built? Let's take a look at the architecture of a conversational AI chatbot.
In the field of natural language processing (NLP), data is king: the more data you have, the better your results. Most new research is freely accessible these days and, thanks to the cloud, there is virtually unlimited computing power at our disposal. Despite all this, what keeps an NLP researcher from achieving state-of-the-art results is the lack of good data.
Small businesses thrive on in-store customers. When they reopen post-lockdown, a major challenge would be ensuring the safety of their staff and customers. Sanitizing and limiting shop occupancy are important safety measures but so is social distancing. How can small shops, with their limited resources, monitor their customers and enforce social distancing?
Real-time object detection is a potential solution.
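Once a detector has found the people in a frame, the distancing check itself is simple geometry. As a minimal sketch (the bounding boxes below are hypothetical; a real system would get them from a detector such as YOLO and calibrate pixel distances to real-world metres):

```python
import math
from itertools import combinations

def centroid(box):
    # box = (x1, y1, x2, y2) in pixel coordinates
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def violations(boxes, min_dist_px):
    """Return index pairs of detections closer than min_dist_px."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        (ax, ay), (bx, by) = centroid(a), centroid(b)
        if math.hypot(ax - bx, ay - by) < min_dist_px:
            pairs.append((i, j))
    return pairs

# Hypothetical person detections from one video frame:
boxes = [(0, 0, 50, 120), (60, 0, 110, 120), (400, 0, 450, 120)]
print(violations(boxes, min_dist_px=100))  # → [(0, 1)]: first two are too close
```

In practice the pixel threshold has to be mapped to physical distance (for example via camera calibration or a known reference object in the scene), but the per-frame logic stays this simple.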
Let’s start with the obvious question, what is a tokenizer? A tokenizer in Natural Language Processing (NLP) is a text preprocessing step where the text is split into tokens. Tokens can be sentences, words, or any other unit that makes up a text.
Every NLP package implements a word tokenizer. But Malayalam tokenization poses a particular challenge.
Natural Language Processing (NLP) is a field within Artificial Intelligence (AI) that allows machines to parse, understand, and generate human language. This branch of AI can be applied to multiple languages and across many different formats (for example, unstructured documents, audio, etc.).
Considering that the NLP market is anticipated to be worth $13.4 billion in 2020, it is worth delving deeper into this field of AI.
This article explains how NLP works, how it is used, and what the future looks like for this exciting area of AI.