Crushbank Exec: ChatGPT Is An Opportunity For MSPs, But Bias Is A Concern
‘Any MSP should be looking at it and using the language capabilities today in interesting and diverse ways,’ says David Tan, Crushbank co-founder and CTO. ‘This way they can start to understand what the technology does, the limitations and the opportunities.’
Crushbank co-founder and chief technology officer David Tan believes companies in the channel should leverage the benefits of ChatGPT instead of turning a blind eye to it.
“Any MSP should be looking at it and using the language capabilities today in interesting and diverse ways,” he told CRN. “This way they can start to understand what the technology does, the limitations and the opportunities.”
Tan is no stranger to artificial intelligence. In fact, he welcomes it.
“I have a computer science background, I started as a developer,” he said. “I just started playing with AI when the technology came along using my coding background.”
Now heading up Syosset, New York-based software vendor Crushbank, he has built AI products on IBM Watson, Microsoft Azure and OpenAI. These days, he’s working with ChatGPT to bring AI to the channel.
“The OpenAI piece is something that we’re testing right now, building full-blown chatbots essentially the same way ChatGPT works,” he said. “Your customers can start asking questions like, ‘I just got a new iPhone, how do I set it up?’”
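The setup Tan describes, a knowledge base fronted by a ChatGPT-style conversational layer, can be sketched roughly as follows. Everything here is illustrative: the FAQ entries, the keyword matcher and the helper names are invented for this example, and the message list simply mirrors the shape of a chat-completions payload rather than any actual Crushbank code.

```python
# Illustrative sketch of a help-desk chatbot flow: match a user question
# against a small knowledge base, then assemble the chat-style messages
# a completion API call would send. FAQ content is made up for the example.

FAQ = {
    "iphone setup": "Open Settings and follow the enrollment prompts to join the company MDM.",
    "password reset": "Use the self-service portal to reset your password.",
}

def best_match(question: str) -> str:
    """Pick the FAQ entry whose topic shares the most words with the question."""
    q_words = set(question.lower().split())
    scored = {topic: len(q_words & set(topic.split())) for topic in FAQ}
    topic = max(scored, key=scored.get)
    return FAQ[topic] if scored[topic] > 0 else "No matching article found."

def build_messages(question: str) -> list[dict]:
    """Assemble a chat-completions-style messages payload (not sent anywhere here)."""
    return [
        {"role": "system",
         "content": "You are an IT help-desk assistant. Ground your answer "
                    "in the knowledge-base excerpt provided."},
        {"role": "user",
         "content": f"Knowledge base: {best_match(question)}\n\nQuestion: {question}"},
    ]

msgs = build_messages("I just got a new iphone how do I set it up?")
print(msgs[1]["content"])
```

In a real deployment the retrieval step would use embeddings rather than keyword overlap, and the payload would go to a hosted model; the point is only the division of labor Tan outlines: the MSP’s own documentation supplies the facts, and the language model supplies the conversation.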
While still in its infancy, ChatGPT has sparked many conversations in the channel with some saying it’s a hacker’s dream and others leveraging the AI platform for their business.
Tan sees both the benefits and the pitfalls of the platform, most notably bias.
“At the end of the day, it’s still a machine. It still has to be programmed and managed and controlled by humans,” he said. “But I am concerned about, point blank, white males training the system that will make the content more specific to white males, or Americans will make it more specific to America.”
But those pitfalls can be managed, he said.
Check out what the tech guru said about ChatGPT, its downsides, its upsides and why everyone in the channel should be leveraging it.
How can the channel leverage the benefits of ChatGPT?
I think people need to understand what it can do, and I think there’s a bunch of ways that you could do it. The channel, in general, doesn’t do a really good job of building documentation optimally, both internally and for their clients. What I mean by that is, you see all the turnover in the space right now. There’s constantly people coming in and it takes three to four months to get people up to speed. I think if people did a better job of building things like runbooks, manuals, user guides… stuff like that, you could get people a lot more productive a lot faster. And I think leveraging a large language model like ChatGPT is a good way to do that. The underlying technology lets you write plain-language documentation that you could actually use without having to understand all the technical underpinnings. Once you do that for your internal-facing people, you can really start to extend that outward and make some of that documentation available to customers.
If you are at a company that’s managed by an MSP, you need to reach out internally, but then they have to go to the MSP. That’s two or three levels of people you need to go through to get what should be a really simple answer. There’s no reason the machine can’t do that with language models, and ChatGPT is really powerful for that. I think understanding that technology and leaning on it the right way to enhance what you do is what’s interesting and powerful. I mostly focus it on support delivery, but there’s no reason it can’t be widely used in sales and marketing efforts as well.
CRN has talked to a handful of cybersecurity experts who are gravely concerned about malicious actors using ChatGPT. Are you concerned that this gives hackers a leg up?
Oh, no doubt. I mean anytime that you can replicate a real human being more efficiently and effectively, there is no question it is potentially dangerous. Someone was asking me about this recently and said, ‘I don’t understand why this will do a better job of tricking me.’ It’s really simple. If you are sophisticated enough to take examples of a real person’s writing, feed that into ChatGPT and then use that to craft an email to anyone, they’re going to say, ‘They use the same turns of phrase that [this person] uses all the time, it’s probably from them.’ One of the things that we’ve always relied on was that people would be smart enough to say, ‘This was clearly written by a machine.’ Or, ‘This was written by someone that doesn’t have great language skills.’ Now that that’s going away, and you can leverage AI to actually replicate someone, I think there is a real fear in the cybersecurity space.
The other thing that becomes even more concerning is feeding that into things like voice mimickers: by taking voice samples, training a model and building dialogue, you can have a voice synthesizer create speech that actually sounds like you. Then obviously the next step beyond that is video. So I think we’re at the really early stages of the dangers, and we need to be super careful. In the world we live in now, where everyone is posting their life on Instagram, TikTok and Facebook, getting those video and audio samples is pretty simple.
Are you concerned about ChatGPT using outdated content?
That’s obviously a limitation right now. ChatGPT is not always getting the most up-to-date data. If you went to ChatGPT right now and asked a question about the Super Bowl, you obviously wouldn’t get the last Super Bowl, because it’s disconnected. Microsoft just announced its integration of ChatGPT with Bing, and that is connected to the internet, so it is going to have the most recent information as it gets better and more widely used. What people don’t seem to understand is that this technology is still very, very young. It will get better.
How do you think MSPs can use ChatGPT to their benefit?
I talked about leveraging it as a way to optimize everything from the emails you send. It’s basically a way to have an in-house expert next to you on all things. Right now, ChatGPT in and of itself is just an expert in language and understanding natural language. So again, asking it to optimize emails, presentations or webinars is great. Ultimately, it will start to get more domain-specific knowledge, so then you can start to ask a question. I think it’s still early on in the ways you can use it, but I would say any MSP should be looking at it and using the language capabilities today in interesting and diverse ways. This way they can start to understand what the technology does, the limitations and the opportunities.
Does the same go for vendors as well?
Oh, without a doubt. If the vendors don’t understand how this benefits their customers, then they’re missing the boat on it as well. So they should be leveraging it, they should understand it and they need to find ways to apply it.
I don’t mean to pick on them, but the whole Kaseya-Datto merger was so mismessaged. And I like Fred [Voccola, Kaseya CEO] a lot personally and professionally, but if he had consulted a model for some empathetic language on how to present some of that stuff, I think it would have been a much better presentation on the first day and going forward.
Where would you like to see ChatGPT used to the best of its ability five years from now?
So I think the biggest thing is, like I said, I think we need to make versions of ChatGPT that have domain-specific knowledge. So this is the IT version of it that understands IT language. If I asked it about Teams, it knows I’m not talking about the Yankees. It knows I’m talking about a Microsoft product. If I asked about a portal, it knows that could be SharePoint.
At the end of the day, it’s still a machine. It still has to be programmed and managed and controlled by humans. But I am concerned about, point blank, white males training the system that will make the content more specific to white males, or Americans will make it more specific to America. I think there is real fear. OpenAI has done a good job of trying to limit that stuff, but it is going to be an ongoing battle. So my biggest fear, and the thing I’m worried most about, is the biases that are inherent in the system. I think we really as a society, and technologists in general, need to be acutely aware of making sure biases don’t enter into systems.
So you’re saying that it’s not only humans that have biases; robots have them too?
Yeah, absolutely. The easiest example is around health care. If you have a bunch of health care records around white males, then you’re not going to get great results if you’re looking at a woman or an African-American. So it’s really important to train those biases out of the system.