Are you looking to reduce your underwriting expenses and transform your claims process? Are technological advancements and cut-throat competition driving you to analyze and optimize your existing processes?
For a long while, process automation has been at the forefront of the insurance landscape, but now, the future of insurance will be shaped by behavioral intelligence and predictive analytics. To maintain a competitive edge, you must transform your traditional, rule-based framework into a data-driven, intelligent, and predictive system.
Let’s take an example of a modern insurance company that is disrupting the industry landscape. The company offers homeowners and renters insurance. It targets tech-savvy millennials—people with basic coverage needs, looking for a completely digital experience.
They hit the nail on the head by building a business model powered by artificial intelligence (AI) and predictive behavioral analytics. The insurer uses behavioral intelligence to measure customers’ “digital body language” from the moment they begin the application process all the way through to filing a claim. This wealth of data is leveraged to provide a world-class customer experience.
So, if you’re looking to transform your processes and tap into your target market share, predictive analytics is the answer.
Here are five areas where predictive analytics is projected to be influential:
Underwriting
Predictive analytics acts as a virtual assistant for underwriters. It analyzes historical data to rank risk parameters by significance and weight, and provides data-driven reports at a glance for efficient decision-making.
Insurance Pricing
With predictive analytics, you can dynamically adjust quoted premiums. By monitoring variables—such as claim history in an area, construction costs, and weather patterns—you can predict risk and set prices more accurately.
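To make the idea concrete, here is a minimal sketch of how a quoted premium might scale with weighted risk factors. The factor names, weights, and base premium are purely illustrative assumptions, not an actual pricing model:

```python
# Hypothetical sketch: adjusting a quoted premium from weighted risk factors.
# All names and numbers below are illustrative, not real actuarial inputs.

BASE_PREMIUM = 1200.0

# Each factor is normalized to 0..1; weights reflect assumed significance.
RISK_WEIGHTS = {
    "area_claim_history": 0.5,
    "construction_cost_index": 0.3,
    "severe_weather_score": 0.2,
}

def quote_premium(factors: dict) -> float:
    """Scale the base premium by a weighted composite risk score."""
    score = sum(RISK_WEIGHTS[name] * factors[name] for name in RISK_WEIGHTS)
    # A composite score of 0 leaves the base premium unchanged; 1 doubles it.
    return round(BASE_PREMIUM * (1 + score), 2)

print(quote_premium({
    "area_claim_history": 0.4,
    "construction_cost_index": 0.2,
    "severe_weather_score": 0.6,
}))
```

In a production system the weights themselves would be learned from historical loss data rather than fixed by hand.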
Claims Processing
Decision-making support through analytics can help you adjudicate claims accurately. It will also help expedite the process and reduce errors.
Preventing Claims Fraud
Input claim parameters—such as a surge in claims during a specific month, previous matching claim amounts, or the same surveyor being involved in multiple claims from the same area—can be compared with past records, and an alert can be raised if anything unusual is detected. You must take advantage of any available data and convert it into actionable intelligence.
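As a rough illustration, the rule-based checks described above might be sketched like this. The field names and thresholds are assumptions for the example, not real fraud-detection rules:

```python
# Illustrative rule-based fraud flagging: compare an incoming claim against
# past records and collect alerts. Thresholds here are arbitrary examples.

from collections import Counter

def fraud_flags(claim: dict, history: list) -> list:
    """Return a list of alert strings for one incoming claim."""
    flags = []

    # An amount exactly matching a previous claim by the same claimant.
    if any(past["claimant"] == claim["claimant"]
           and past["amount"] == claim["amount"] for past in history):
        flags.append("amount matches a previous claim")

    # The same surveyor appearing in several claims from the same area.
    surveyor_area = Counter((p["surveyor"], p["area"]) for p in history)
    if surveyor_area[(claim["surveyor"], claim["area"])] >= 3:
        flags.append("surveyor involved in multiple claims from this area")

    # A surge of claims during the same month.
    month_counts = Counter(p["month"] for p in history)
    if month_counts[claim["month"]] >= 5:
        flags.append("surge in claims this month")

    return flags
```

A real system would replace these hand-written rules with models trained on historical fraud outcomes, but the input/alert shape stays the same.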
Improving Customer Loyalty
Predictive analytics can be used to anticipate the needs of your customers by analyzing their history and behavior. This information can also help you offer personalized products, better suited to their specific needs.
Transform Data into Future Insights!
It’s time for you to focus on what the future holds for your organization. Predictive analytics has never been more important for insurers, and time is of the essence. Technology, and implementing it in a timely manner, is the best way for you to boost customer loyalty, increase market share, and thrive in a highly competitive market.
To learn more about how Newgen’s predictive analytics helps insurers, like you, contact us here.
Data: It’s the backbone of any maintenance program. It’s what you use to measure success. It tells you what assets need more attention and how that will impact your schedule. It’s what helps you survive maintenance audits unscathed. In short, data is the language that helps you tell the story of your maintenance team.
But not all data is created equal. And it could be that yours is failing to say what it needs to. Jason Afara, a Senior Solutions Engineer at Fiix, experienced this when he was a maintenance manager.
“We had more technicians than we did CMMS licenses, so we had people logging in after they had already completed a work order, just trying to fill in all the details they could remember,” he says. “We were always trying to catch up, and that impacted our credibility.”
The cost of bad maintenance data
That’s just it—when your data is off, it’s harder to go to bat for your team. It’s not as easy to justify buying a new piece of equipment, trading production time for maintenance, or making a new hire if the data isn’t there to support that request.
It can impact your team on a day-to-day basis as well. For example, a technician might wait until the end of the day to log completed work. This gap in time could lead them to misremember how long it took them to do a job. Maybe they round down. No big deal, right? Except it is.
That one mistake could cause a domino effect. The next time you go to schedule that job, you plan less time for it. Now the technician is rushing to complete the work, increasing risk for both them and the machine. You’ll also lowball the cost of labor hours in your budget, putting you in a tricky situation with your finances.
Let’s dive into where your data can go wrong, and how you can audit it to start steering things in the right direction.
Where bad maintenance data begins
Bad data is often born from the best intentions. That makes it hard to spot. But there will always be a silver lining to go along with these issues—you have a data-driven culture. You know the numbers are key and the insight you get from them is even more valuable. That’s the most important ingredient for finding and eliminating bad data.
Here are two aspects of maintenance programs that most often contribute to bad or incomplete data.
Trying to boil the ocean
A lot of maintenance teams try to do too much, too soon with their data. Having the ability to track things is great, but if you don’t have a well-thought-out plan in place for what you’re going to measure—and why—you’ll run into problems.
It’s an easy trap to fall into. The advent of IIoT technology, like sensors that track every second of an asset’s behaviour, has introduced seemingly infinite ways to capture data. The trouble for maintenance managers doesn’t come from having too much data, but from not knowing how to pull out the data that matters.
Brandon De Melo, a Customer Success Manager at Fiix, puts it this way, “Let’s say you have a sensor that’s pulling machine data. That’s great, but you can’t stop there. You have to consider all the things that factor into that data, like downtime or other external factors that could affect it.”
Not thinking critically about metrics
Every maintenance team is held to certain KPIs—but are they the right ones? As Stuart Fergusson, Fiix’s Director of Solutions Engineering, points out, it can be easy to get caught in a cycle of tracking a number like labour hours simply because it’s the metric that comes from your boss (or their boss).
It’s important to take a critical lens to maintenance metrics and really think about whether they should be measured.
“At the end of the day, you need to be measuring the metrics that support your department,” says Fergusson. “Not enough people understand why they’re measuring what they’re measuring.”
Where bad maintenance data lives
We know what contributes to bad data, but where does it show up? Bad data is really good at blending in with clean data, so it’s not always obvious. But knowing the telltale signs of inaccurate information will help you spot it without poring over dozens of reports. Here are the most common places where you can find bad maintenance data.
In your storeroom
Bad data can lurk alongside bearings and motors on the shelves of your storeroom. There are a few ways this can happen.
Firstly, it’s easy to have an out-of-date inventory count if you have obsolete parts sitting on shelves. If you don’t check in on your inventory to make sure it matches up with what’s actually available, you’ll run into problems when you have to pay for a part you weren’t expecting.
And then there’s the danger of fudging the numbers to make the bottom line look better.
“Let’s say it’s near the end of the month and you have to replace a $3,000 part,” says Afara.
“Some maintenance managers will say, ‘You know what? Let’s just wait for that repair so it actually hits our books next month.’ It turns into a bit of a game.” This hesitation can negatively impact the whole business if what’s in the books is valued over what’s actually needed to improve production.
In your preventive maintenance schedule
Every maintenance team has their regular PMs—but how many of them are actually necessary?
“Maintenance can get really emotional really quickly,” says Afara. “You’ll have what’s called an emotional PM, where the team is doing a regular check just because there was a failure six plant managers ago and no one’s changed it.”
When maintenance teams inherit PMs, it’s easy not to question them, and easy to see how things snowball into an inaccurate story of which work actually needs to be done.
In your work order and asset histories
It doesn’t take much for data to go haywire when documenting work. Attention tends to go to the wrong places when a plant’s priorities are out of sorts.
“What commonly happens is, there’s such a focus on technician time,” says Afara. “A message comes from the top that every minute needs to be accounted for, and the result is that technicians are just making up time on work orders to show that they’ve done the eight hours they’ve been asked to.”
As we touched on earlier, the root problem here is a lack of specific planning. You’re worrying about the metric at the expense of strategy, which results in data that doesn’t tell the truth and can’t be used to drive real change.
In your reports
Every data set has its spikes and dips. The important part is how you’re making sense of the fluctuations that show up in your maintenance reports.
“Do you actually have anything in place to explain why, for example, a drop can happen in September and then happen again in January?” says De Melo.
Without critical analysis or an understanding of what contributed to an anomaly in the data, tracking those fluctuations is useless. You need to understand what happened before you can begin to understand what you could have done differently.
How to audit maintenance data
Now that we have a clearer picture of where maintenance data can go wrong, how can you start fixing it?
The answer will be different for each team, but the right place to start is wherever you’re having a problem with no way to explain why you’re having it.
“Let’s say you can’t figure out why you have so much unplanned downtime, and looking at the data isn’t helping you at all,” says De Melo.
“In this scenario, you’d want to talk to the production manager and start asking questions like, ‘How is this being tracked? Is there a system in place?’ There will always be a process of tracking down the right information, but you can’t just sit there twiddling your thumbs, hoping that the answer is going to come to you.”
In terms of creating a data audit checklist, again, your best bet is to approach it from a strategic perspective.
“Sit with some key stakeholders, like plant managers and technicians, and do some brainstorming around what you want to improve and understand better,” says De Melo.
“Once you know what you’re looking for, you can build a checklist that makes sense.”
The best maintenance data is data with a purpose
Taking a critical and thoughtful approach to auditing your maintenance data ensures that everything you’re tracking and analyzing is being examined for a reason. This helps you understand how each piece of data is connected. Then you can make actual improvements to your maintenance program instead of making smaller, less impactful changes around the margins.
“If you really understand your maintenance activity, everything else is just going to flow in behind it,” says Fergusson.
“Your plant leadership may not understand maintenance backlog or OT, but when you tell them that delaying a maintenance window is going to cost another $250,000 in our plant maintenance budget because of X, Y, Z, and you have the right data to back it up, they’ll listen.”
When all is said and done, the data is the easy part.
“If you have the culture and the metrics and the right people and processes in place to track everything, and you just don’t have the actual data, no problem. You can get that up and running in a week,” says Fergusson.
“More often, though, it’s the opposite. You have all the data, it’s all flowing somewhere, and everybody’s looking at different pieces of it, but none of it’s building to a true story.”
Social distancing measures taken by responsible employers have greatly increased the number of employees working remotely. Even in the midst of this crisis, some companies and their employees can enjoy the objective benefits of not having to waste time and money on long commutes. At the same time, plenty of businesses didn’t have the structure in place to support a vast, full-time work-at-home workforce while keeping their business processes secure.
Remote Workforce Security Challenges During the Coronavirus Outbreak
Because employees or departments scrambled for ad-hoc solutions to remote working, they sometimes sacrificed robust security to get up and running as quickly as possible. Sadly, cybercriminals can also work from home or other remote locations, and many saw the rise in remote workers as an opportunity.
For example, one survey of security professionals found:
A majority of security employees struggled to offer strong security solutions to remote employees.
At the same time, almost half of the respondents reported seeing an increase in phishing attempts.
Most of these corporate security pros had concerns about their ability to scale security, respond to abrupt environmental changes, and control employee use of unknown and untested software.
Five Best Security Practices for Remote Employees
With the increase in cyberthreats and the concerns of security professionals in mind, it’s a good idea to consider some best practices to help keep business systems free of threats and just as important, to ensure compliance with rules that govern privacy and security in different industries.
1. Two-Factor Authentication
With two-factor authentication, sometimes called 2FA, users have to finish their login with a code that gets sent to another device, typically a cell phone. It takes a few seconds longer to access the system, but it provides better protection against phishing attacks. One CTO found that this simple measure reduced security problems in his company by almost 40 percent.
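For the curious, the mechanics behind the one-time codes most 2FA apps generate (time-based TOTP, per RFC 6238) can be sketched with the standard library alone. This is a teaching sketch, not a substitute for a vetted 2FA library; the secret below is the RFC test key:

```python
# Minimal TOTP sketch (RFC 6238-style) using only the Python standard library.
# Real deployments should use a vetted 2FA library; this shows the mechanics.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time code from a shared secret."""
    counter = struct.pack(">Q", timestamp // step)          # time window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, now: int = None) -> bool:
    """Accept the current code or the previous window to allow clock skew."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now - drift * 30), submitted)
               for drift in (0, 1))

# With the RFC 6238 test secret, the 6-digit code at timestamp 59 is 287082.
print(totp(b"12345678901234567890", 59))
```

The server and the phone app share only the secret; both derive the same code independently from the current time window, which is why the code a user types expires within seconds.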
2. Use Secure Connections
Obviously, most of these home workers will rely upon their home Wi-Fi connections. Without any other protections, your security will only be as good as whatever the employee’s home internet company, router, and password can provide. To boost security, you might have employees log in through a VPN or other method of encrypting communication between their home device and your corporate systems.
3. Endpoint Security and Monitoring
No matter how well you protect logins and communication, you still can’t always avoid the threat of malicious code entering your system. On your server end, you can employ software to block threats and monitor system usages.
Even though most threats may stem from accidental vulnerabilities, it’s impossible to ignore the rise of inside jobs as a source of risks. Not only will these systems provide a firewall against malicious software, they can also send automatic alerts for unusual data use and provide a clear audit trail just in case something does happen.
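As a simple illustration of the “alerts for unusual data use” idea, one common approach is to flag activity that deviates sharply from a user’s own baseline. The window size and threshold below are illustrative assumptions, not a product feature:

```python
# Hedged sketch: alert when a user's daily data transfer deviates sharply
# from their own historical baseline. Threshold and units are illustrative.

from statistics import mean, stdev

def unusual_usage(history_mb: list, today_mb: float,
                  z_threshold: float = 3.0) -> bool:
    """Alert when today's transfer is more than z_threshold standard
    deviations above the user's historical mean."""
    if len(history_mb) < 2:
        return False  # not enough history to establish a baseline
    baseline, spread = mean(history_mb), stdev(history_mb)
    if spread == 0:
        return today_mb > baseline
    return (today_mb - baseline) / spread > z_threshold
```

Production endpoint-monitoring tools layer many such signals together, but each one boils down to comparing observed behavior against an expected baseline.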
4. Develop and Create Clear Security Policies
Even before the coronavirus outbreak, companies grappled with security issues that stemmed from remote workers and the rising use of personal devices.
For example:
In some cases, you may allow personal devices, so long as employees adhere to other security policies. For instance, you may require installation of approved security software and only let employees login to your network through your corporate VPN.
In other cases, you may ask employees in sensitive areas to only use the laptops or other devices that you have issued to them and to only use them in approved ways. For example, you may restrict these company-issued devices to work and not allow employees to use them to watch videos or browse social sites.
In any case, it’s important to develop clear policies. In addition to communicating these rules, you should also ensure that employees understand why they’re important and that they can incur consequences for ignoring them.
5. Deploy Secure Information Systems
Deploying intelligent and robust document and data management systems may not take as much of an effort as you think it will. These systems come designed and built to offer robust security and rule-based access for both in-house and remote workers. They also provide audit trails and guarantee recoverability, so if something suspicious happens, it’s easy to trace the issue to its source and remediate it.
How M-Files Offers the Best Solution for Remote and In-House Employees
Companies that already employed a smart data management system like M-Files didn’t have to worry about an abrupt change from working in a corporate office to a home office.
For example:
Access to documents could already have been set by role, so the people who needed information would have an easy time accessing it, according to their security levels. To others, that same information would be invisible. The right people could view, change, add, or delete information, and others would not even see that it exists.
With built-in encrypted access and simple rollbacks for recoverability, M-Files is also an ISO 27001 certified provider. This standard covers the requirements for the most sensitive data and systems.
Besides security, the intelligent features of M-Files can help improve your business processes. To learn how M-Files can help protect your business, employees, and information, schedule a custom demo today.
A recent article published in The Guardian highlighted ‘bias’ on the part of digital forensic examiners when examining seized media. In the original study, the authors found that when 53 examiners were asked to review the same piece of digital evidence, their results differed based on the contextual information they were provided at the outset. Interestingly, whilst some of the ‘evidence’ on which they would base their findings was easy to find (such as in emails and chats), other ‘traces’ were not. These required deeper analysis, such as identifying the history of USB device activity.
One of the things that struck me was that the 53 examiners were all provided with a very short brief of what the case was about (intellectual property theft) and what they were tasked to find (or not find), including a copy of a spreadsheet containing the details of individuals who had been ‘leaked’ to a competitor.
This immediately reminded me of my first weeks within the police hi-tech crime unit (or computer examination unit as it was called). I vividly remember eagerly greeting the detective bringing a couple of computers in for examination in a suspected fraud case. I got him to fill in our submission form – some basic details about the case, main suspects, victims, date ranges, etc. I even helped him complete the section on search terms and then signed the exhibits in before cheerily telling him that I’d get back to him in the next few weeks (this was in the days before backlogs…).
As I returned from the evidence store, I was surprised to find that same detective back in the office being ‘questioned’ by my Detective Sergeant. “John,” as we will call him (because that was his name), an experienced detective with over 25 years on the job, was asking all sorts of questions about the case:
Who were his associates?
What other companies is he involved in?
Does he have any financial troubles?
Is he a gambler?
Did you seize any other exhibits?
Does he have a diary?
How many properties does he own?
The list went on. In fact, it was over an hour before John felt that he had sufficient information to allow the detective to leave. Following the questioning, John took me aside and told me that whilst we used the paperwork to record basic information about the case – it was incumbent on us to find out as much information as possible to ensure that we were best placed to perform our subsequent examination.
My takeaway? You can never ask too many questions – in particular, those of the ‘who, where, when’ variety.
HAS DIGITAL FORENSICS CHANGED SINCE THEN?
Given the rapid development in technology since those early days in digital forensics, you would think the way agencies perform reviews of digital evidence would have, well, kept up?
I recently watched a very interesting UK ‘fly on the wall’ TV series (Forensics: The Real CSI) that followed police as they go about their daily work (I do like a good busman’s holiday) and one episode showed a digital forensic examiner tasked to recover evidence from a seized mobile phone and laptop in relation to a serious offence.
“I’ve been provided some case-relevant keywords,” he said, “which the officer feels may be pertinent towards the case.” “Murder, kill, stab, Facebook, Twitter, Instagram, Snapchat … and for those keywords I’ve searched for, there is potentially just under 1,500 artifacts that I’ll have to start scrolling through.”
Wait, what?
“Have I been transported back to the 90s?” I thought as I watched in (partial) disbelief and was again transported back and reminded of John’s sage advice all those years ago about asking lots of questions.
Whilst I understand that the show’s director was no doubt using the scenes to add suspense and tell the story in the most impactful way possible, there is no getting away from the fact that the digital forensic examiner was working with limited information about the case and with some terrible keywords.
Yes, they can (and no doubt did, off-camera) pick up the phone to the Officer in the Case (OIC) to ask further questions. But surely the OIC is the one who will see a document or email (that perhaps hasn’t been found by keyword searching), spot a name or address within it, and immediately shout “Stop! That’s important!” The OIC will recognize the suspect in a holiday photograph having a beer with another suspect who they swear blind they’ve never met.
FOCUSING ON THE RIGHT EVIDENCE
How does this all tie back into the research I mentioned at the outset? The various ‘traces of evidence’ the examiners were tasked to find were both ‘hidden in plain sight’ and required skilled forensic analysis in order to identify and interpret their meaning. If the digital forensic examiner spends most of their precious time reviewing emails and documents – in the real world – will they have the time to perform the skilled digital forensics work to build the true picture of what happened?
If the OIC is only provided with material to review based on such basic keyword analysis or a couple of paragraphs that detail a very high-level overview into the case – will the smoking gun holiday snap make it into the review set?
Expert commentary in the article suggests that “Digital forensics examiners need to acknowledge that there’s a problem and take measures to ensure they’re not exposed to irrelevant, biased information. They also need to be transparent to the courts about the limitations and the weaknesses, acknowledging that different examiners may look into the same evidence and draw different conclusions.”
A spokesperson for the National Police Chiefs’ Council is quoted saying “Digital forensics is a growing and important area of policing which is becoming increasingly more prominent as the world changes … We are always looking at how technology can add to our digital forensic capabilities and a national programme is already working on this.”
Nuix is keen to support this national program, and I truly believe that our investigator-led approach to reviewing digital evidence using Nuix Investigate is the best way to put the evidence into the hands of those best placed to make sense of it (the easier ‘traces’, as per the study). Doing so frees digital forensic examiners to focus on the harder ‘traces’, such as undertaking deep-dive forensic analysis or ascertaining the provenance of relevant artifacts.
Please note. No digital forensic examiners were harmed in the writing of this blog – and I fully appreciate the hard work they do in helping to protect the public and bringing offenders to justice, often working under significant pressures and with limited resources and budgets.
While it’s absolutely true that a lot of your business processes are important, they also expose your organization to a wide range of potential issues that you may not even be aware of.
Every manual process performed by one of your actual human employees leaves open the possibility of productivity bottlenecks. Things are getting done, but they’re just not getting done as quickly as they should. It also creates the potential for miscommunications — two people involved in the same process just weren’t on the same page and now they’ve suffered a major setback because of it. The mishandling of information, low employee morale, you name it — these are the hidden costs of those tedious manual processes.
But the good news is that it is possible to make sure that all of this work gets done in a way that allows you to avoid every one of the issues outlined above. It’s called workflow automation and if your organization hasn’t already begun to explore its wide range of benefits, now would be an excellent time to start.
What is Workflow Automation? An Overview
At its core, workflow automation involves both the digitization and automation of business processes, all in an effort to reduce the amount of manual labor required by your employees as much as possible.
All told, there is a wide range of workflow types that are prime candidates for automation. These include, but are certainly not limited to:
Filing or making changes to documents with a consistent structure.
Reviewing and approving changes that have been made to documents.
Notifying people (like team leaders) when a change to a document has been made by an employee.
Processing accounts payable or similar administrative functions.
The management of records retention and document storage.
Executing process management reports.
And much, much more.
With an intelligent document management solution like M-Files, for example, you can make sure that documents are always routed to the correct person when they’re created or when certain status changes have been made. If you have a single document that needs to be approved by 10 team leaders before it can make its way to a client, for example, the employee who created that document shouldn’t have to spend time chasing down every single one of them to keep things moving. With workflow automation, each of those team leaders can be instantly notified that there is a document that needs to be signed off on and once they do, it continues to move further and further down the line.
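The multi-approver routing described above can be sketched as a small state machine. To be clear, the class, approver names, and notification hook below are hypothetical illustrations, not the M-Files API:

```python
# Hypothetical sketch of sequential multi-approver document routing.
# Names and the notification hook are illustrative, not a real product API.

class ApprovalWorkflow:
    """Route a document through a list of approvers, one sign-off at a time."""

    def __init__(self, document: str, approvers: list):
        self.document = document
        self.pending = list(approvers)   # approvers still waiting to sign off
        self.approved_by = []            # approvers who have signed off

    def notify_next(self):
        """Return who should be notified next (None when fully approved)."""
        return self.pending[0] if self.pending else None

    def approve(self, approver: str) -> None:
        """Record a sign-off and advance the document to the next approver."""
        if self.pending and self.pending[0] == approver:
            self.approved_by.append(self.pending.pop(0))

    @property
    def complete(self) -> bool:
        return not self.pending

# Each sign-off automatically advances the document down the line,
# so no one has to chase approvers by hand.
wf = ApprovalWorkflow("Q3 proposal.docx", ["lead-a", "lead-b", "lead-c"])
while not wf.complete:
    wf.approve(wf.notify_next())
```

In a real workflow engine, `notify_next` would trigger an email or in-app notification, and approvals could also run in parallel rather than strictly in sequence.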
Likewise, many workflow automation solutions allow you to monitor, report on and even analyze your current business processes — all to help capitalize on opportunities for improvement on an ongoing basis. Many provide reporting dashboards, for example, that allow process managers to view each step of a particular business workflow in fine detail. This puts them in a better position to eliminate the types of performance bottlenecks that cost time and money, thus improving those processes in meaningful ways.
They even offer the ability to show users a full history of all business process steps, confirming beyond a shadow of a doubt that automation software gets the job done far more efficiently than humans could on their own.
In a larger sense, workflow automation also makes it easier for employees to communicate with one another — which itself is a great way to empower their ability to collaborate. A lot of the workflow automation solutions you would be using include built-in communication tools that make sharing documents and other important project-related data easier than ever. When you make it easier for your employees to work together, you increase the chances that they do — thus improving employee morale and improving the quality of work that they’re able to do in the first place.
In the end, workflow automation is more than just another IT trend or passing fad. It’s an opportunity to optimize processes across all departments in a way that eliminates human error, gets rid of performance bottlenecks, and improves the quality of work you’re able to do with your clients. It improves the speed at which your organization can move because it frees up the valuable time of your human employees so that they can focus on those matters that truly need them.
It’s also a way to save valuable resources while improving both internal and external transparency, which for many businesses may very well be the most important benefit of all.