Q&A: Jamf CTO on how generative AI will support Apple admins
I recently wrote about Jamf’s plans to use generative AI to support Apple admins.
Since then I’ve caught up with Jamf Chief Technology Officer Beth Tschida, who offered some more insight into the company’s intentions.
The company is working on two features to get things started, she said.
The first is a chat interface for interacting with Jamf documentation, support, and community-generated content.
The second is a security alert analysis and auto-investigation capability.
For the record, here’s what she told me:
What is it for?
Securing Apple devices organization-wide is no small feat; the knowledge required is extensive and often elusive. Off-the-shelf AI answers may be decent for mainstream management and security questions, including those about Jamf products, but they falter when faced with complex, undocumented scenarios… of which there are many in this unprecedented time of security threats and rapid Apple growth. To address this, we developed a custom Large Language Model (LLM) app, linking it directly to our documentation and user forums. This kept the AI on track more reliably than the basic AI bots.
As for our security analysis tool, we’re taking a different route. The magic in AI output often lies in the quality of the prompt. Rather than put the onus on the user to generate the perfect query, our tool automatically crafts the ideal prompt, aiming to offer the most actionable and personalized information for the situation at hand.
How does it work?
When it comes to our chat interface, think of it as an AI bot on steroids. It’s not just spitting out generic answers; it’s tailoring responses by diving deep into our extensive documentation and the treasure trove of real-world insights found in the Jamf Nation forums.
This dual linkage helps us keep the information not just accurate but also fresh and context-aware. It’s like having an expert who not only knows the official playbook but has also spent time in the field, gathering that irreplaceable “institutional knowledge.”
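To picture the kind of grounded chat flow described here, below is a minimal Python sketch: retrieve relevant passages from documentation and forum posts, then build a prompt that confines the model to that context. The toy knowledge base, the keyword-overlap retrieval, and the prompt wording are illustrative assumptions rather than Jamf’s implementation; a real system would use embeddings and a vector store.

```python
# Illustrative sketch only: a grounded "docs + forum" chat flow.
# Retrieval here is naive keyword overlap (a stand-in for embeddings and a
# vector store), and the LLM call itself is not shown.

from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. "Jamf Pro documentation" or "Jamf Nation forum"
    text: str

KNOWLEDGE_BASE = [
    Document("Jamf Pro documentation",
             "Smart Groups update membership automatically based on inventory criteria."),
    Document("Jamf Nation forum",
             "Recon delays can make Smart Group scoping look stale; trigger an inventory update first."),
]

def retrieve(question: str, k: int = 2) -> list[Document]:
    """Rank documents by keyword overlap with the question (toy retrieval)."""
    q_terms = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda d: len(q_terms & set(d.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to retrieved Jamf content."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in retrieve(question))
    return ("Answer using ONLY the context below. If the context does not cover "
            "the question, say you don't know rather than guessing.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

if __name__ == "__main__":
    print(build_grounded_prompt("Why does my Smart Group scoping look stale?"))
```

The instruction to refuse questions the retrieved context doesn’t cover is one common way to keep a bot “on track,” in the spirit of what Tschida describes.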
For the security analysis feature, imagine an admin or an analyst gets a security alert.
They push a button and we take that raw, anxiety-inducing data and make it useful. How? We enrich it with additional telemetry—like what happened minutes before and after the alert—and then we let our LLM play detective.
It doesn’t work alone; it’s backed and primed by our threat research team’s insights. The result is a detailed, hypothetical root-cause analysis. It provides a deep dive that not only suggests what might have happened but also explains why we think so, all while giving you a roadmap on what to do next.
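To make that flow concrete, here is a rough Python sketch of alert enrichment followed by automatic prompt construction. The alert fields, the telemetry stub, and the prompt template are my own illustrative assumptions, not Jamf’s pipeline, and the actual LLM call is left out.

```python
# Illustrative sketch of the alert-analysis flow: enrich a raw alert with
# telemetry from a window around it, then auto-build the analysis prompt so
# the admin never has to write one. Field names and the telemetry source are
# hypothetical, and the LLM call is not shown.

from datetime import datetime, timedelta

def fetch_telemetry(device_id: str, start: datetime, end: datetime) -> list[dict]:
    """Stand-in for pulling process and network events recorded around the alert."""
    return [
        {"time": start + timedelta(minutes=4), "event": "curl launched by unsigned binary"},
        {"time": start + timedelta(minutes=6), "event": "outbound connection to unfamiliar host"},
    ]

def build_analysis_prompt(alert: dict) -> str:
    """Enrich the alert with events from minutes before and after it, then wrap
    everything in a fixed, expert-written prompt template."""
    start = alert["time"] - timedelta(minutes=5)
    end = alert["time"] + timedelta(minutes=5)
    events = fetch_telemetry(alert["device_id"], start, end)
    timeline = "\n".join(f"- {e['time'].isoformat()}: {e['event']}" for e in events)
    return ("You are assisting a security analyst. Based only on the alert and the "
            "timeline below, produce a root-cause HYPOTHESIS, explain the reasoning "
            "behind it, and list recommended next steps. Label uncertainty clearly.\n\n"
            f"Alert: {alert['title']} on device {alert['device_id']}\n"
            f"Timeline:\n{timeline}")

if __name__ == "__main__":
    alert = {"title": "Suspicious persistence item", "device_id": "MAC-042",
             "time": datetime(2023, 9, 26, 14, 0)}
    print(build_analysis_prompt(alert))
```

The point of the template is the one Tschida makes: the quality of the prompt, not the user’s prompting skill, determines how useful the output is.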
So, in a nutshell, we’re offering a robust, context-aware chat interface and a data-rich security analysis tool that, together, aim to guide you effortlessly through your decision-making process. We’re providing actionable insights, helping you either validate or tackle issues head-on.
How do you protect against garbage in, garbage out in this domain of AI?
We are very aware of this problem, and within our use cases we can sidestep GIGO by removing free-text or chat interfaces where possible, as in the alert use case. And even when we do want to use this approach, we have already started to implement thorough, highly validated data ingestion to avoid garbage in, with very clear warnings and instructions for end users on how the outputs should be used.
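As a small illustration of that approach, the sketch below screens user-generated content before it reaches the knowledge base and stamps every answer with usage guidance. The validation rules and the warning text are invented placeholders, not Jamf’s criteria.

```python
# Illustration of "validated ingestion plus clear warnings": screen community
# posts before they join the knowledge base, and append usage guidance to every
# generated answer. The quality gates shown are invented placeholders.

USAGE_WARNING = ("Note: this answer was generated from documentation and community "
                 "content. Verify against official guidance before acting on it.")

def validate_post(post: dict) -> bool:
    """Accept a community post only if it passes basic quality gates
    (placeholders: marked as an accepted answer and long enough to be useful)."""
    return post.get("accepted_answer", False) and len(post.get("text", "")) > 80

def ingest(posts: list[dict]) -> list[str]:
    """Return only the validated post bodies for indexing into the knowledge base."""
    return [post["text"] for post in posts if validate_post(post)]

def present_answer(raw_answer: str) -> str:
    """Attach the end-user warning to every model output."""
    return f"{raw_answer}\n\n{USAGE_WARNING}"
```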
What’s the protection if things go wrong?
Following on from the previous answer, the main issue is users taking the LLM responses as 100% correct and acting on them without question.
As mentioned, we already expose clear warnings, but we have also done extensive prompt development to make sure the model identifies when it has strayed from its assigned knowledge base, minimizing any potential hallucinations. So much of it comes down to how these features are positioned to the user.
For example, in the security analysis, prefacing everything it produces with a bold-faced heading that says “Hypothesis” makes it hard to mistake the output for anything other than a best guess based on limited data.
It’s worth pointing out that in both use cases, the GAI is functioning in an advisory role and the humans are still at the wheel. This will be the right approach for some time, until we have a firm grasp on overall reliability, and that will only come from our customers interacting with these tools and from ongoing refinement.
Equally, what is the benefit when they go right?
We believe that all our use cases will mean Jamf admins and Jamf employees spend less time troubleshooting and hunting for answers to their questions, without having to trawl through pages and pages of documentation, and only needing support on the hardest and most pressing problems. Then, further down the line, it’s about making complex interactions in our product simpler and more explainable for any Jamf admin and user.
In the realm of endpoint security, it is becoming dangerously common for security duties to be piled onto IT admins. They’re given tools they can’t use and data they can’t interpret, and the situation is not unlike forcing an untrained flight crew to land a plane. It’s all well and good until a company gets compromised and discovers it had alerts along the way that could have saved it.
By leaning on GAI, we are able to quickly give even entry-level analysts a starting point and the ability to rapidly triage the alerts they are receiving and make timely decisions about what to do.
Why does Jamf need input to help optimize this system?
Right now, one of our main knowledge bases is in fact user-generated, and we have to make sure this is highly validated so that our LLM’s answers are as correct as possible; this needs our Jamf expertise and admin experience brought together. We sit at a unique place between this data and the intelligence our many support, ThreatLabs, and development teams have.
What is the promise?
Our promise for now is to innovate while empowering admins to be productive at work, and we will be doing so with responsibility at the forefront. AI is a very powerful and intriguing suite of tools, but we won’t be rushing to put it into our products and customers’ hands without reasonable guardrails, policies, and protections in place to build trust and minimize friction.
Where to next?
As mentioned, we are on this path to production for two main use cases, and after JNUC we are focusing on how to test this internally before offering customers betas. We are also still looking at various new use cases that will make our offerings easier to use, more compelling, and understandable for all.
Jamf has been making great strides in extending the security of the Mac ecosystem. Most recently it acquired dataJAR; last year it launched a new fund to invest in security products for the Apple ecosystem, acquired ZecOps, and introduced an Executive Threat Protection feature.
The idea of using GAI in defined domains is almost certainly going to become part of any digital transformation project in the years ahead.