In this age of 24/7 client focus, it is easy for practice management to lose sight of the fundamentally important job of identifying bottlenecks in internal operating processes.
In many cases, hidden away in the business are functions that remain supported primarily by a mixture of seemingly permanent “temporary” shadow systems, such as spreadsheets and shared drives, all held together by manually driven processes.
Quite often, due to budget prioritization, resource constraints, or simply a failure to set aside time to review your own business, opportunities to introduce robust, low-risk digital solutions that improve productivity and bottom-line profitability are missed.
Typical questions that need to be asked include:
- Can we ensure our compliance and governance processes are adhered to?
- Are our data sources secure and controlled?
- Are we maximising the value of our data?
- Do our processes work seamlessly across the Practice and externally to our Clients?
- Can we guarantee that GDPR requests can be answered?
- Am I sure that my KPIs are based upon fact?
- How can we work smarter?
Many organizations do not have an accurate accounting of their own IT estate. How many and what types of servers? How many and what types of applications? How well utilized is the IT estate? Are there redundancies in application licenses and maintenance? How much of the estate is at the end of service life (EOSL)? Can the estate be consolidated based on newer infrastructure?
Regular analysis of the costs and performance of your legacy applications identifies the gaps, performance degradation, outages, and service interruptions that inevitably arise. The resulting analytics highlight areas that need to be modernized to improve the performance, availability, and support of your systems.
Importantly, the benefits of process optimization have a wider reach than just the business processes. The operational knowledge and experience needed to manage and oversee people-dependent processes tend to be concentrated in the heads of a few key individuals. This results in greater risk to the business.
Ironically, having fewer people with complete knowledge of key processes also complicates succession planning, leaving transitions prone to disruption, straining relationships with partners, and breeding general discontent within the organization.
Ultimately, a more structured, less stressful working environment enables your teams to function effectively, enhances well-being, and contributes to staff retention; this in turn raises productivity, increases customer satisfaction, and supports greater profitability.
Your enterprise content—structured and unstructured—is the most important resource in the organization. And content without context has limited business value. With the explosion of data volume in recent times, it has become a necessity to have a holistic content management strategy to better leverage, manage, and harness the information residing in multiple systems. And metadata lies at the core of content management strategy.
A metadata-driven approach to managing enterprise content has game-changing potential. It allows organizations to add context to their content, which further helps in making insightful decisions and driving meaningful engagements with customers, partners, and all stakeholders.
In simple terms, metadata describes data and enriches the content with information. This makes it easier to discover, use, and manage enterprise-wide content. It is the essential glue that helps bridge content silos across multiple lines of business, drives processes, enables meaningful content associations, and ensures faster access to precise content sprawled across repositories.
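As a toy illustration of how metadata makes content easier to discover across silos, here is a minimal sketch. The document fields and metadata keys are hypothetical examples, not any specific product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A content item enriched with descriptive metadata."""
    doc_id: str
    body: str
    metadata: dict = field(default_factory=dict)

def find(documents, **criteria):
    """Discover documents whose metadata matches every given key/value pair."""
    return [d for d in documents
            if all(d.metadata.get(k) == v for k, v in criteria.items())]

docs = [
    Document("001", "Q2 sales figures...", {"department": "Finance", "type": "report"}),
    Document("002", "Onboarding checklist...", {"department": "HR", "type": "checklist"}),
    Document("003", "Q2 forecast...", {"department": "Finance", "type": "forecast"}),
]

# Metadata, not file location, drives retrieval: one query spans all repositories.
finance_docs = find(docs, department="Finance")
```

The point of the sketch is that the query works against descriptive attributes rather than storage paths, which is what lets metadata bridge content held in separate systems.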
Why Do You Need to Rethink Your Metadata Strategy?
- Metadata is the building block of any content management system. With the content management market moving towards decoupled content services, the use of metadata is also gaining prominence
- Enterprise content without context doesn’t hold any business value. Metadata classifies, organizes, labels, and tags your data, making information much easier for users to consume
- Enterprises like yours are looking towards metadata services that can be system-generated, user-defined, or intelligently extracted from the content. They want their metadata models to help drive processes, content lifecycles, and other policies and business rules
- Metadata can help to secure your data by intelligently extracting critical information from documents and restricting access to it based on the metadata value. For instance, in the HR department, only members of the HR team can view employee files; in government offices, only security-cleared employees can read and respond to incoming citizen correspondence
- Almost every business process is based on or around content. To achieve complete automation, content must behave as an enabler rather than an inhibitor. This can only be done through metadata, which provides context and acts as a driver for your content-based enterprise applications
- File analytics—one of the most sought-after capabilities these days—needs the help of metadata to uncover hidden insights and identify dark data
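The HR example above, where access is restricted based on a metadata value, can be sketched in a few lines. This is a minimal illustration under assumed role names and field names, not a description of any particular system's security model:

```python
def can_view(user_roles, doc_metadata):
    """Grant access only when the document's metadata does not demand a role
    the user lacks; documents without a 'restricted_to' tag remain open."""
    required = doc_metadata.get("restricted_to")
    return required is None or required in user_roles

# Hypothetical documents: one tagged as HR-only, one untagged.
employee_file = {"type": "employee_file", "restricted_to": "HR"}
newsletter = {"type": "newsletter"}

assert can_view({"HR"}, employee_file)         # HR staff may open employee files
assert not can_view({"Sales"}, employee_file)  # other departments are denied
assert can_view({"Sales"}, newsletter)         # untagged content stays open
```

Because the rule reads the metadata rather than the document body, the same check applies uniformly to any repository the metadata model covers.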
There are innumerable use-cases and benefits of metadata. And enterprises can realize its full potential through a structured approach.
Newgen can help you realize the full potential of your content by transforming your metadata usage through our contextual content services platform. Newgen’s platform has received the highest possible rating for its metadata services in Forrester Wave: Content Platforms, Q2 2021. Read the complete report for detailed insights.
Risk managers are facing possibly their biggest challenge in these pandemic-hit times: increasing regulatory pressures and the complex demands of managing remote workers, combined with previously unseen volumes of data. This perfect storm is driven by the complexity and demands of risk management, each area carrying its own unique compliance and regulatory challenges.
Across all sectors, radical transformation is required to address growing internal and external challenges. Organisations need creative solutions that offer agility and empower growth and profitability, whilst enabling a comprehensive approach to risk management.
The risk management community are required to identify and address risk and issue challenges, often driven by:
- Environmental and technical inter-program dependencies
- Internal political pressure
- Cross-organisational initiatives, such as third-party suppliers
- External factors, such as political, economic, social, and legislative
To address the above challenges, risk managers need clarity of information, quality and certainty of data, and above all, a 360-degree view. In addition, they are looking for analytics and insights from the data to support decision-making processes.
Identification typically includes:
- Identify sources of risk to the business
- Define specific risks and issues
- Identify and create a working risk framework
- Analyse and describe the steps in risk management
- Identify threat and opportunity responses
- Identify and map the flow of information through the process
- Identify and, where required, create risk and issue documentation
- Obtain full buy-in and continuous support from the C-level team
Not uncommonly, identification presents a series of further challenges, mainly from the sheer amount of information available across an organisation. In many circumstances, critical information resides in non-integrated silos across the technology estate: legacy systems, files on local shared drives, and unprotected “shadow systems” such as spreadsheets or databases.
Digital Transformation enables the risk management function to be smarter, more agile, and more strategic
Half of all organisations entered the COVID-19 pandemic without updated crisis management plans. Many acknowledge that a pandemic or similar global crisis was not on their radar of potential threats, and that data resided in multiple sources and had to be pulled together manually, resulting in:
- Missing critical information that prevented faster or smarter action
- No or limited connectivity across systems
- Data, systems, or processes that could have supported confident decision-making not being in place
The first step in transforming the risk function is to identify where improvements can be found. What gaps in current mission-critical processes surfaced during the pandemic? The crisis gave risk professionals critical, real-life insight into how existing processes held up in mission-critical situations.
Wealth of Data
Most sectors rely heavily on access to a very broad range of information, and this wealth of information can help risk management become a more strategic function. Data on climate change, social unrest, or legislation is critical when investing in new business opportunities, such as new geographical locations. With this enhanced visibility, the risk manager can play a far greater strategic role in the company, ensuring adherence to localised compliance rules, whether environmental, technical, or logistical, and preventing losses in the first place.
Corporations now need qualified data to add greater value to decision-making. Risk managers have moved on from simply taking a financial view of risk; they are expected to support the business goals of their companies in a much more holistic sense.
And it is here that technology can provide the support and the means to make this happen.
The way forward
Emergent digital technologies are providing huge benefits to risk managers facing today’s challenges, from integration via application programming interfaces (APIs) to comprehensive use of both structured and unstructured metadata, enabling intelligent search to extract the full value from the data held.
Developing a 360-degree view of risks, how they interrelate, and their potential impact on the organisation is essential for building resilience. Consolidating data into a fully searchable environment enables risk managers and other stakeholders to securely analyse data, share information, and collaborate on business exposure.
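The consolidation idea can be sketched very simply: pull records out of their separate silos, tag each with its origin, and search the merged set. The source names and record fields below are hypothetical illustrations, not a real integration:

```python
def consolidate(*sources):
    """Merge risk records from separate silos into one list,
    tagging each record with the system it came from."""
    merged = []
    for name, records in sources:
        for rec in records:
            merged.append({**rec, "source": name})
    return merged

def search(records, keyword):
    """Case-insensitive keyword search across every field of every record."""
    kw = keyword.lower()
    return [r for r in records
            if any(kw in str(v).lower() for v in r.values())]

# Two hypothetical silos: a legacy system and a shadow spreadsheet.
legacy = ("legacy_erp", [{"risk": "Supplier insolvency", "severity": "high"}])
sheets = ("spreadsheets", [{"risk": "GDPR breach", "severity": "medium"}])

all_risks = consolidate(legacy, sheets)
hits = search(all_risks, "gdpr")
```

Keeping the `source` tag on every record preserves the audit trail back to the original silo, which matters once stakeholders start acting on the consolidated view.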
Use of Risk Hubs
The frequency and severity of disruptive risk events are undeniably increasing. Risk teams need the ability to plan, run scenarios, and respond to future events effectively, safe in the knowledge that the data they are using or providing is as factual and near real-time as possible.
The use of portals, or “risk hubs”, to securely share data offers an excellent approach to ensuring the right information is available at the right time. These hubs enable both internal and external co-operation, along with information updates, real-time exchange, and knowledge sharing.
Digital transformation of risk management enhances the traditional business model, transforming it into an aligned, collaborative, and interconnected digital environment. By adopting the correct processes and procedures, and by using the most appropriate tools, the ability to confidently lead an organisation through a crisis swings heavily in favour of the risk practitioner.
Artificial intelligence is a term that’s risen to become one of the most talked-about topics across many technology and business fields. Just look at LinkedIn, for example – #artificialintelligence has nearly 2.5 million followers! By comparison, #digitalforensics has only just under 6,000 followers, which says something about just how interested people are in artificial intelligence.
I think it’s important to have an honest and realistic understanding of what artificial intelligence is (and isn’t), the effects it will have on the world as it advances and how it has already transformed many of the business practices we take for granted today.
Over the next few months, I’d like to dive into the many facets of artificial intelligence that apply directly to digital forensics and investigations. While I’m looking at the subject from one perspective, many of these views can easily apply to other functions, technologies and industries. To begin the conversation, I think it’s important to look at some of the overlooked distinctions in the types of artificial intelligence to understand where we are today and where we’re headed in the future.
ARTIFICIAL INTELLIGENCE: ANI, AGI AND ASI
According to IBM, a leader in AI development, artificial intelligence “leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.” Discussions about AI range from the futuristically mundane (self-driving cars, a reality even today) to the downright dystopian (who hasn’t seen The Matrix?). I think it’s safe to say that self-driving vehicles aren’t going to take the world over tomorrow and enslave mankind, yet the same label is applied.
There must be some distinction under the broader umbrella of artificial intelligence. This is where the terms artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial superintelligence (ASI) come into play.
Artificial Narrow Intelligence
ANI, which also goes by the term “weak AI”, is where we mostly are today. This form of AI is programmed to perform a specific task, and as far back as 1996 we saw this with the famous set of chess matches between Garry Kasparov and Deep Blue. Not only does ANI operate on a specific task, it also bases its decision-making on a specific set of data.
With the advent of the internet and so much data available so readily, ANI can foster the illusion of broader intelligence, but realistically speaking ANI lives up to its name of ‘narrow’ intelligence, what many of us today regard as machine learning. The differences between true artificial intelligence and machine learning deserve their own article (or several!).
Artificial General Intelligence
AGI, “strong AI,” moves into the realm of exhibiting the flexibility of actual human intelligence. Probably the best example of this at present is IBM’s Project Debater, which by some estimates can debate topics at the level of a high school sophomore. This kind of intelligence, which lacks what we would consider sentience, is difficult to produce in computers despite the advances made to date in processing power and speed.
ASI raises the bar another level, surpassing human intelligence. This is likely not something we’ll need to worry about until much farther into the future; I’ll potentially touch on ASI in an article down the road.
WHAT DOES ANI MEAN RIGHT NOW FOR INVESTIGATIONS?
There’s always a conversation about whether artificial intelligence will someday replace examiners, which I think is unlikely. There is simply still too much value in the human perspective and decision-making process to expect computers to take over completely given the state of the technology.
What is true, however, is that ANI has changed the face of investigations. Gone are the days of heavy manual file carving or hex review; there’s simply no need to get that technical anymore inside of every investigation. And while I’d rather not think too much about it, artificial intelligence has done wonders by limiting the amount of time examiners need to spend looking at the disturbing images and videos that make up CP/CSAM cases.
Artificial intelligence, even at the ANI level, has come a long way in its ability to automatically identify things like skin tone, body parts, drugs, weapons and other common artifacts that can lead investigators to the truth in a case.
It’s interesting, as I considered this topic, just how far technology has progressed. It’s possible to do so much more in an accelerated window of time as an examiner. I’m not a computer ‘nerd’ in the traditional sense – a fact that I’m sure many IT departments I’ve worked with can attest to – but I get genuinely excited as a forensic examiner thinking about the possibilities that exist by combining the Nuix Engine with existing artificial intelligence capabilities.
And I’m looking forward to exploring the topic of artificial intelligence, along with other investigations subjects, in the articles to come!