Gartner Inc.

06/01/2023 | Press release | Distributed by Public on 06/01/2023 13:02

Another Ride on the Roller Coaster

It looks like 2023 is going to be the year of hyperbole and FUD in the cybersecurity world. In short order, we are grappling with the terrible/wonderful promise of generative AI, China owning all our base through TikTok and the Metaverse running into the virtual ditch.
Perhaps I have simply been in this business too long (yes, I am one of those people who started with punch cards and a mainframe), but I still have the naïve expectation that leaders will make some attempt to see past the frenzy of excitement or horror thrown about the industry to discern what is really going on before they take action.
Unfortunately, the IT industry runs on hyperbole. New technology (or old tech with a fresh coat of paint) is promoted as the solution to all our problems, while any negative aspects of the technology are downplayed or denied. Let's take a look at some of the wild roller-coaster rides we are currently experiencing:

TikTok-

Ah, social media! The platforms we love to use but love to hate. There are all sorts of security issues with social media and we have discussed and experienced these problems since social media went mainstream with GeoCities and MySpace. I will not spend time here delving into the particulars of the TikTok controversy (see my earlier blog on TikTok: https://blogs.gartner.com/andrew-walls/fear-uncertainty-doubt-and-tiktok). The controversy has put security leaders between a rock and a hard place. Government leaders in the US and many other countries (as well as states and provinces) are pushing for bans of the application despite the lack of publicly available evidence that the app presents a unique and substantial security risk.

Security professionals work hard to manage threat exposure by identifying threats and building business cases which support investments that will effectively reduce threat exposure. Now, government leaders are telling us to ignore all of that threat analysis stuff and just take their word for it. This is FUD and security theater, the very things we have spent years avoiding in order to build credible, defensible security programs.

This puts security leaders in a difficult position. They can either maintain an evidence-based approach to prioritizing security investment or they can acquiesce to political forces and support a ban of TikTok. Rejecting government edicts for a ban can have repercussions far beyond the security department, particularly if the government controls your funding or is a major client. Supporting a ban puts your own credibility as a security professional on the line and employees will question future security actions as being politically motivated rather than based on solid evidence. Some declassified evidence of security issues with TikTok would go a long way to resolving this tension. Until that happens, FUD and politics will drive some security leaders to make reluctant, data-free decisions about threat management.

Generative AI-

A recent survey conducted by Gartner (n=1679) indicated that 68% of the executive respondents believe that the benefits of AI outweigh the risks, while 27% responded that they don't know. I am with the 27%! At this stage of the game, we can only speculate about the risks resulting from integration of GEN-AI into applications and processes. A few organizations are already seeing immediate benefit from GEN-AI in various use cases, and I am confident that benefits will continue to develop. But experience teaches us that there is no free lunch. AI will cause problems, some of which we can anticipate and some of which will come as a surprise. OpenAI has done excellent work by unveiling an impressive GEN-AI platform that is stimulating the world to explore and invest in AI. This is a good thing, but there will be problems as we move past ChatGPT and integrate GEN-AI and related AI concepts and models into our organizations. We will grapple with data security, privacy, IP ownership, and explainability of outcomes. And we need to consider carefully how we deal with the social and cultural impacts of AI adoption. People growing up with AI are the supply chain for new employees and leaders. What effect will pervasive use of AI have on education, work culture and the expectations of the communities that support our organizations as employees and clients?

I am not waving the FUD flag here. I think it will be great to have AI working with us, but I fully expect that AI will have negative impacts and we should advance carefully and identify the security threats as soon as we can. We will find that, in some use cases, the risks outweigh the benefits. As an old IBM ad campaign put it: 'Enthusiasm is great, experience is better.' Our enthusiasm for the promised benefits of AI should not inhibit investment in finding and mitigating the risks. Be part of the 27%.

Metaverse-

It appears that Meta's pursuit of a Metaverse for social and business interactions is not going according to plan. Anyone surprised? If you are - or were - a user of Second Life, you shouldn't be. The concept of an immersive, virtual world is not new: Linden Lab demonstrated it with Second Life, and multiple gaming platforms have leveraged it to provide dazzling environments for gameplay. And that is just the point; immersive, virtual reality is not new. In fact, way back in 2007 I published a research note titled "How to Create Effective Security Management in Virtual Worlds."

Why would anyone expect an immersive virtual reality platform to suddenly explode in popularity when no previous, masterfully executed implementations have done so? As an avid reader of William Gibson's books, I thoroughly enjoy his evocative descriptions of flying through virtual space to hack through black ice security perimeters. Unfortunately, there are many fantasies that are great fun to read, but not so great to implement or use. Meta's implementation of a virtual world (the Metaverse), complete with an Oculus headset, is simply the latest example of a small but influential group geeking out over technical wizardry. 'Really cool stuff' does not constitute a business case.

We all know this, but our industry seems to relish demonstrating the immaturity of our profession by chasing the latest shiny thing, with little regard for whether it is ever likely to be viable.

Summing up-

It is not my intention to indulge in 20/20 hindsight to criticize technologists, CIOs or CISOs. We need to learn from these brouhahas and be willing to be the adult in the room who does not allow enthusiasm to blind us to real-world experience. It is the job of security and technology professionals to provide honest assessments to enterprise leaders:

  • The TikTok controversy is political in origin, remains a political issue and is not a security matter.
  • GEN-AI is shiny, exciting and new, but it will tarnish, crack and evolve into a boring but useful toolset.
  • The Metaverse is a fantasy that few people will ever want to experience.

Each of these ideas is riding the hype rollercoaster and will eventually complete the circuit: climbing through exaggerated claims of impact and importance, descending into the trough of disillusionment, and then starting the hard work of grinding up the slope to productivity. All of us should consider this entire technology lifecycle and curb our enthusiasm for the political mess, latest gadget or sci-fi fantasy that has fired up the media. Be the adult. Be part of the 27%. Embrace with caution while you take the long view of the road ahead.

This means that we must be pragmatic about what these exciting new capabilities will actually mean to our organizations. What are the real use cases your organization is exploring or could pursue? Security thinking needs to be integrated into these early-stage explorations to produce viable business cases. If you cannot find a realistic use case with a matching business case, you are probably looking at something that is more hype than substance. By all means, explore, be creative, and test out new ideas. This is how we innovate. But don't be fooled by hype.