More than a month into 2024, security experts are discussing the new threats on the horizon, the benefits and disadvantages of generative AI, what trends will emerge, and what we may see in the coming year.
We turned to our own in-house experts and asked them to submit their predictions for 2024 and beyond in cybersecurity and technology overall. Here’s what they’ve shared.
Mature Data Security Programs
Microsoft Copilot will drive the urgency for data security programs to mature. Data discovery, classification, and protection will move up the list of urgent initiatives as Copilot proliferates throughout the O365 stack.
Today, compliance comes first and insider threats come second in most data discovery and classification initiatives. Copilot will make questionable internal actions easy enough that companies will be forced to respond.
Let’s broadly classify insider threats related to data exfiltration into two categories: accidental and malicious. Accidental means inadvertently sending internal PII to a Gmail address in violation of corporate policy, or sending customer data to the wrong customer.
Malicious means seeking out sensitive corporate data and downloading, emailing, or otherwise sending it outside the organization with intent to steal. Searching for files and exporting data from databases takes deliberate effort.
Imagine that all users can ask ChatGPT-style questions about the entire corporate data set. “What are the top 100 customer names and SSNs?” “What is our CEO’s salary?” Suddenly, malicious theft becomes exponentially easier and takes on a casual, less tactical nature.
Microsoft already provides tools to ensure data is properly tagged and carries the right controls, but every organization has to take on that initiative itself. As soon as these kinds of Copilot queries start circulating between companies, the effort will become urgent.
AI will also push query languages behind the scenes of cybersecurity products and expertise. You can’t work as a security analyst, threat hunter, SIEM/XDR/EDR operator, vulnerability admin, or in many other cybersecurity roles without learning some set of query languages.
Front-ending each product’s query language with an AI chatbot is a natural use case already making its way into security vendors’ products, SentinelOne among them. It will save time and definitely lower the bar of expertise required.
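To make the idea concrete, here is a minimal sketch of what such a front-end might look like: a natural-language question is handed to an LLM, which returns a query the SIEM can actually run. The OpenAI client, the model name, the index name, and the SPL-style target language are assumptions for illustration, not any vendor’s actual implementation.

```python
# Minimal sketch of an AI chatbot front-ending a SIEM query language.
# Assumptions: an OpenAI-compatible chat API is available, and the SIEM
# accepts SPL-style searches against an index named 'wineventlog'.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You translate a security analyst's natural-language question into a "
    "single SPL search against the index 'wineventlog'. Return only the query."
)

def question_to_query(question: str) -> str:
    """Ask the LLM to translate an analyst question into a SIEM query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    q = "Show failed logons for service accounts in the last 24 hours"
    print(question_to_query(q))
    # Expected shape (illustrative only):
    # search index=wineventlog EventCode=4625 user="svc_*" earliest=-24h
```

The analyst never writes SPL directly; the chatbot layer owns the query language, which is exactly how the bar of required expertise drops.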
Contributor: Novacoast has been delivering data security program initiatives for years and can help companies prepare for Copilot.
Human Risk Management
We will see increasing interest in, and adoption of, human risk management tools and practices. The market will become more aware of tools that quantify how risky the human component of security is and how much each employee contributes to that risk.
It’s also likely that human risk management tools will start to converge with security awareness programs, and that more security awareness leaders will demand better visibility into what their employees are doing in their environments.
Advancement of Artificial Intelligence (AI)
We believe it’s likely that AI will make it into anything that involves reading through data. The AI will “read” all of the content, and everything else will be built on top of that output.
Some examples:
The obvious one is search. LLMs will become a personal “Google Search” for everything you would normally read. You’ll start with a summarized version of a document, then drill into the details through natural-language questions, as if you were talking to a team member. The AI will ingest documents, emails, etc., and you talk to it to get the details, with references to the relevant sections. You are already seeing this with Microsoft.
Projecting this out, we predict a personalized AI for all of your documents, emails, articles, code, etc., and it will be cumulative. I see this eventually becoming something you “keep,” like a personal email address: people will have personal AIs that are built out over their careers and go with them even as they move across companies.
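As a rough illustration of the mechanics behind such a personal AI, the sketch below ingests a couple of documents, retrieves the passages most relevant to a question, and assembles a prompt that asks for an answer with references. The toy corpus, word-overlap scoring, and prompt format are assumptions for illustration only; a real system would use embeddings, a vector store, and an LLM to generate the final answer.

```python
# Minimal sketch of the "personal AI over your own documents" idea:
# ingest text, retrieve the most relevant passages for a question, and
# build a prompt that forces the answer to cite its sources.
from collections import Counter

# Hypothetical corpus standing in for your documents, emails, etc.
documents = {
    "benefits_policy.txt": "Employees accrue 15 vacation days per year...",
    "q3_report.txt": "Q3 revenue grew 8% driven by the services business...",
}

def score(question: str, text: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    q_words = Counter(question.lower().split())
    t_words = Counter(text.lower().split())
    return sum((q_words & t_words).values())

def retrieve(question: str, k: int = 2):
    """Return the k most relevant documents, keeping their names as references."""
    ranked = sorted(documents.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble an LLM prompt that includes sources and asks for citations."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question))
    return (
        f"Answer using only the sources below and cite them by name.\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("How many vacation days do employees get?"))
```

The “cumulative” part of the prediction is simply this corpus growing over a career while the retrieval and citation loop stays the same.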
Another prediction is that AI will take over entry-level analyst positions. AI alone will be responsible for initial incident qualification, evidence gathering, and basic analysis, and analyst jobs will shift up to what we currently classify as Tier 2. The most basic version of this starts with analyst assistant tools that categorize attack types and prioritize them for the analyst.
Another guess: SOAR platforms will merge their playbook content with LLMs in order to classify incidents and suggest investigation and response procedures.
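A minimal sketch of that SOAR-plus-LLM pairing might look like the following: an alert is sent to an LLM along with the list of available playbooks, and the model returns a classification, a priority, and suggested first steps. The playbook names, the JSON contract, and the OpenAI client usage are illustrative assumptions, not any SOAR vendor’s actual schema.

```python
# Minimal sketch of a SOAR playbook library paired with an LLM for Tier 1
# triage: the model classifies an incoming alert against known playbooks
# and suggests initial response steps. All names and fields are invented.
import json
from openai import OpenAI

client = OpenAI()

PLAYBOOKS = ["phishing", "ransomware", "credential_stuffing", "insider_exfiltration"]

def triage(alert: dict) -> dict:
    """Ask the LLM to map an alert to a playbook and suggest first actions."""
    prompt = (
        f"Alert: {json.dumps(alert)}\n"
        f"Choose the best matching playbook from {PLAYBOOKS} and list two "
        'initial response steps. Respond as JSON: {"playbook": ..., "steps": [...], "priority": "P1-P4"}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

alert = {"source": "email-gateway", "subject": "Invoice overdue", "verdict": "suspicious URL"}
print(triage(alert))
```

A human analyst would still review the suggested playbook before execution; the time savings come from the classification and prioritization happening before the ticket ever reaches Tier 2.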
Identity and The Future
Data is king, but is it a benevolent one? Up to now, sharing identity data has been voluntary and opt-in: do I let this app track my movements?
With basic authentication being vulnerable and identity being the new perimeter, authentication vendors now require access to identity data to do their jobs better. Apple’s Stolen Device Protection, for example, tracks your location so it can apply stricter protections when the device is somewhere not usually associated with you.
More mandatory data collection leads to a richer experience, but also to richer targets. We will see more attacks aimed at the central providers, like the Okta incident; we will see the consequences when that centralized, comprehensive data is lost; and we will see who uses this data for good and who abuses it.
Will passwords die? I think we are at the tipping point where enough organizations and applications/services have the ability to stop using passwords. Passwords cause enough problems and pain that passwordless primary authentication will become widespread.
Identity solutions will become more integrated and complex. We have reached the point where we have mostly accomplished the uninteresting part of identity that we’ve slogged through for 20 years—the plumbing of connecting all of the systems to put the basic identity infrastructure and services in place. Now we’re going to have to make those solutions smarter and better by actually using them together. Risk analytics can drive JIT access enforcement and Zero Trust provisioning in real-time.
Authentication itself can be used to remediate risk and respond to change. Decision-making and governance can be streamlined, effective, and real-time. Complicated use cases like personas can be handled effectively without becoming burdensome.
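As a rough sketch of risk analytics driving just-in-time (JIT) access, the example below scores an elevation request from a few risk signals and then either grants a time-boxed entitlement, requires step-up MFA, or denies the request. The signals, weights, and thresholds are invented for illustration; a real deployment would pull them from identity and risk analytics platforms.

```python
# Minimal sketch of risk analytics gating just-in-time (JIT) access:
# a risk score decides between grant, step-up authentication, and deny.
from datetime import datetime, timedelta, timezone

def risk_score(signals: dict) -> int:
    """Toy risk model: sum of weights for the signals that are present."""
    weights = {"new_device": 30, "impossible_travel": 50, "off_hours": 10, "privileged_target": 20}
    return sum(weights[s] for s, present in signals.items() if present)

def decide_jit_access(user: str, role: str, signals: dict) -> dict:
    """Grant, step up, or deny a JIT elevation request based on risk."""
    score = risk_score(signals)
    if score >= 70:
        return {"user": user, "role": role, "decision": "deny", "score": score}
    if score >= 30:
        return {"user": user, "role": role, "decision": "step_up_mfa", "score": score}
    expires = datetime.now(timezone.utc) + timedelta(hours=1)
    return {"user": user, "role": role, "decision": "grant", "score": score,
            "expires": expires.isoformat()}

print(decide_jit_access(
    "jdoe", "prod-db-admin",
    {"new_device": True, "impossible_travel": False, "off_hours": True, "privileged_target": True},
))
```

The point of the sketch is the feedback loop: risk analytics feed the decision, and the authentication step (step-up MFA) is itself the remediation when risk is elevated.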
Where We See Data Security
Microsoft will lead in 2024 with generative AI due to three factors:
- 49% stake in OpenAI
- Early adoption of generative AI in its flagship products, i.e., M365
- Integration of the generative process that respects data protection through existing controls as well as sensitivity labels
Organizations where information workers make up the majority of the workforce, where technology purchases are encouraged, and where the culture supports effective usage will see a productivity increase of up to 10%. The figure would be higher, but 100% adoption among staff is unlikely.
The U.S. government will introduce the fourth data privacy bill. Will it pass? We’ll wait to see, just like everyone else.