Since the end of 2022, speculation about the effect of AI on the workplace has dominated mainstream and technical headlines alike. Sentiment in the cybersecurity industry has been a mix of fear and excitement—concern about empowered attackers, and optimism about tools that promise to make security analysts and IT engineers more efficient.
We’re now seeing studies that bear this out—using AI as a tool in technical roles does result in higher-quality output. In a white paper published this month by the Boston Consulting Group (BCG) titled “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” two groups of consultants were evaluated across 18 different tasks. Those who used AI consistently outperformed those who did not, across several dimensions of comparison.
One particularly interesting insight from BCG’s study is that AI acts as a skill leveler. The greatest increase in performance was seen by the employees who scored lowest in the initial assessment (+43%). Higher-skilled, more experienced employees also saw gains, but not to the same degree (+17%).
But there’s a catch: the BCG team intentionally included a task whose subject matter fell outside the AI’s training data. This “blind spot” ensured the answer the AI returned was wrong, but very convincing. While 84% of consultants solved the problem correctly without AI help, a whopping 70% got it wrong when using AI.
The conclusion, according to study co-author Ethan Mollick in his summary article “Centaurs and Cyborgs,” is that over-reliance on AI can make us lazy. And while AI may act as a skill leveler that elevates lower-level employees to perform like top-level ones, experience still plays a large role in the quality and accuracy of output. The centaur vs. cyborg metaphor illustrates that a user’s approach to AI matters: is there a clear dividing line between tasks the human keeps and tasks delegated to AI (the centaur), or is the work a continuous blend in which some measure of every task involves AI (the cyborg)?
For the cybersecurity industry, which tends to be highly stratified in terms of staff capability, such a tool stands to make a profound impact. AI is clearly a leveler—one that accelerates newer security analysts and boosts the productivity of those already at an expert level.
But the caveat remains: how experienced must you be to catch the wrong answers?