As Artificial Intelligence (AI) technology is becoming more advanced, it is also becoming less detectable — especially with the creation of companies like “Undetectable AI”, created by Boise State University graduate Christian Perry in 2023.
Perry cited Boise State’s Venture College as a helpful resource in creating both ChatterQuant and Undetectable AI.
“I was still in school when I started Undetectable AI,” Perry said. “I will say that the Boise State Venture College was hugely helpful in my journey, especially for ChatterQuant in terms of introductions to mentors and things like that.”
Ally Orr, a Boise State alumna who majored in marketing and now works in sales operations, has witnessed the integration of AI within a sales-based workplace.
While Orr’s company, which she requested remain anonymous, has utilized AI, she stressed the importance of balancing the technology as an aid with ensuring it doesn’t accidentally leak data or encroach on people’s jobs.
“The company I worked for … we have been really careful when using AI just because … we have a lot of valuable data that can be leaked,” Orr said. “But as we’ve been using it to summarize meetings, send emails, do things like that, I kind of wonder if we’re pulling away from seeing people’s value and instead just getting the work done. How can you get promotions, or lift people up if you can’t see their value?”
While AI may be a formidable opponent for many career fields, Orr feels that marketing possesses too much of a human spark to be seriously threatened by the tech.
“I’d say in the realm of marketing, I do worry that people’s skills could be overlooked by AI,” Orr said. “But I also know that marketing is such a human, very emotional experience. When you see commercials, they want you to feel a certain way. I don’t think AI, at this point, can do that.”
“I think we’re in a very fun and also slightly concerning place now where we can see so many things it [AI] can help with, but it makes us think about what if it does this or that,” Orr said. “I think we’re starting to see some of those what-ifs, both good and bad. I think we’ll probably see a lot of laws now fall into place.”
Don Winiecki, Professor of Organizational Performance and Workplace Learning, studies the intersection between STEM and equity and inclusion.
Winiecki explained that technology is never neutral, as it possesses qualities that allow it to be used for positive or negative purposes — a distinction that is subjective in and of itself.
Many new or developing technologies, AI included, lack built-in safeguards due to the pace at which they are being created and built.
“The fact is, technology is always released before it’s totally safe, and then we blame the humans who are wielding the technology as opposed to those who developed it,” Winiecki said. “That’s a principal part of the issue: we release technology because it gives an economic advantage or potentially an economic advantage.”
Although placing boundaries on technology may be futile, Winiecki pointed out that placing boundaries on the humans who engage with it could be more productive.
“Ideas of regulating the tool are probably hopeless,” Winiecki said. “Ideas of trying to fashion systems that put boundaries on what humans do could be a different tack that maybe we have a little bit more leverage to accomplish.”
Winiecki noted the threat to higher education that AI poses when used as a substitute for certain skills.
“The goal of academic work is to add value and to put something out there that can be used to create good somewhere down the road, and if that’s what we’re doing fabulous,” Winiecki said. “… But what has also happened is that an academic degree is a whole lot less important than it used to be because the people who don’t have the training can produce the same level of product.”
As AI becomes an increasingly popular topic of conversation, companies like Perry’s “Undetectable AI” that humanize AI content, as well as companies that claim to detect AI, will continue to sprout up. As Winiecki noted, users now have the opportunity to decide how this technology is used, as well as what guardrails they want to put on the companies and humans who use it.