When William Shakespeare wrote “To be or not to be” for Prince Hamlet, little did he know that the line would be quoted in all manner of contexts for all manner of effects. A cowardly soldier: “to flee or not to flee”; a conniving trader sizing up an unsuspecting customer: “to fleece or not to fleece”; colonial masters strategizing their exit: “to free or not to free”. And, as you may have guessed, a data scientist stumbling upon a couple of interesting variables: “to correlate or not to correlate”.
The first step in predictive modeling is defining the problem. Once done, historical data is identified, and the analytics team can now begin the actual work of model development. In this blog, we touch on the business factors that influence model development. If you find this interesting and want a deeper dive, you’ll have the opportunity to download our whitepaper that goes into more detail on this topic.
If a life insurer wants to build a predictive model, how should they go about it? In this article, we explore the factors that need to be considered before beginning actual model development. We will do this by using the example of predictive models for improving persistency. (Improving persistency for a life insurer means increasing the volume of business they retain.)
Companies design application processes to provide the best possible experience for their customers. These processes rely on application and customer-originated events to function, and those events and their outcomes form the basis of the customer’s experience. An event-driven philosophy is therefore an ideal way for companies to measure customer experience.
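As a minimal sketch of what event-driven measurement could look like, the snippet below records hypothetical application events and derives one crude experience metric from them. The event names, fields, and the time-to-complete metric are all illustrative assumptions, not a real schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record; the names and fields are illustrative only.
@dataclass
class AppEvent:
    customer_id: str
    name: str        # e.g. "application_started", "quote_issued"
    outcome: str     # e.g. "success", "abandoned"
    timestamp: datetime

def time_to_complete(events):
    """Minutes between a customer's first and last event - one crude
    experience metric that an event stream makes possible."""
    times = sorted(e.timestamp for e in events)
    return (times[-1] - times[0]).total_seconds() / 60.0

events = [
    AppEvent("c1", "application_started", "success", datetime(2019, 1, 2, 9, 0)),
    AppEvent("c1", "application_completed", "success", datetime(2019, 1, 2, 9, 45)),
]
print(time_to_complete(events))  # 45.0
```

The point is not the particular metric but that, once events carry an outcome and a timestamp, experience measures fall out of the stream rather than requiring a separate survey.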
I recently saw a tweet from Mat Velloso: “If it is written in Python, it’s probably machine learning. If it is written in PowerPoint, it’s probably AI.” This is probably the most accurate summary of what has happened in AI over the past couple of years. A few months back, The Economist shared the chart below, which shows the number of CEOs who mentioned AI in their earnings calls. Towards the end of 2017, even Vladimir Putin said that the nation that leads in AI “will be the ruler of the world.” Beyond all this hype, there is a lot of real technology being built. So how is 2019 going to look for all of us in the insurance world?
The field of artificial intelligence has always envisioned machines that can mimic the functioning and abilities of the human mind. Language is considered one of the most significant achievements of humanity, and one that has accelerated its progress. So it is no surprise that plenty of work is being done to bring language into artificial intelligence in the form of Natural Language Processing (NLP). Today we see that work manifested in the likes of Alexa and Siri.
One of the challenges insurers face when implementing any new cloud-based application into their workflow is the integration of internal and external data. Gaining access and permission to use internal data can be the first hurdle; the added requirement to deliver that data in a specific format can be a show-stopper.
Insurers have a near-constant stream of unstructured data at their disposal that can be used to drive growth by improving policyholder retention and identifying cross-sell and upsell opportunities. One of the challenges for insurers is sorting through this mountain of unstructured data quickly to gain an accurate understanding of the sentiment of their customers in real time.
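To make the idea concrete, here is a deliberately tiny lexicon-based sentiment sketch. The word lists are toy assumptions invented for illustration; a production system would use a trained NLP model rather than keyword counting:

```python
# Toy sentiment lexicons - illustrative assumptions, not a real vocabulary.
POSITIVE = {"great", "helpful", "fast", "happy", "easy"}
NEGATIVE = {"slow", "confusing", "unhappy", "denied", "frustrating"}

def sentiment_score(text: str) -> int:
    """Net count of positive minus negative words. A real deployment would
    use a trained model, but this shows the shape of the problem."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("The claims process was fast and easy!"))  # 2
print(sentiment_score("Renewal was slow and confusing."))        # -2
```

Even this toy version hints at why scale matters: scoring one message is trivial, but doing it accurately across millions of free-text messages in real time is where the real challenge lies.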
Enterprise applications belong to a vibrant ecosystem, and consequently the data they generate is large and varied. Enterprises both benefit and suffer from this nature of their applications and data. Whenever a new application that integrates with the ecosystem is to be deployed in an enterprise, the precondition is an 'expansive data definition with referential value' on day one. Traditionally, this approach to data integration involves identifying a target data structure and force-fitting data from all sources into it. This is done to ensure a 'seamless' integration - never mind the loss of data considered irrelevant.
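The force-fitting described above can be sketched in a few lines. All field names here are invented for illustration; the point is that anything outside the target schema is silently dropped:

```python
# Sketch of traditional 'target schema' integration; field names are invented.
TARGET_FIELDS = {"policy_id", "holder_name", "premium"}

def force_fit(record: dict, mapping: dict) -> dict:
    """Project a source record onto the target schema via a field mapping.
    Source fields without a mapping are silently dropped - the data
    deemed 'irrelevant' at integration time."""
    return {tgt: record.get(src) for tgt, src in mapping.items()
            if tgt in TARGET_FIELDS}

source = {"PolNo": "LP-001", "Name": "A. Rao", "Prem": 1200,
          "AgentNotes": "prefers email"}
mapping = {"policy_id": "PolNo", "holder_name": "Name", "premium": "Prem"}
print(force_fit(source, mapping))
# {'policy_id': 'LP-001', 'holder_name': 'A. Rao', 'premium': 1200}
# Note that AgentNotes never makes it across.
```

The dropped AgentNotes field is exactly the kind of data whose "irrelevance" is decided on day one and regretted later.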
According to the Coalition Against Insurance Fraud, at least $80 billion in fraudulent claims are made annually in the U.S. across all lines of insurance. This translates to an estimated $400 to $700 per year in increased premiums for each American family.
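A quick back-of-the-envelope check shows the two figures are consistent. The household count below is an assumption (a rough U.S. Census-order figure of about 128 million households), not a number from the Coalition:

```python
# Sanity check: does $80B spread across U.S. households land in $400-$700?
fraud_total = 80e9       # annual fraudulent claims, from the Coalition figure
households = 128e6       # assumed rough count of U.S. households
per_household = fraud_total / households
print(round(per_household))  # 625 - inside the cited $400-$700 range
```

A smaller household estimate pushes the figure toward $700, a larger one toward $400, which is presumably why the Coalition quotes a range.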