understand that. Ontologies make this knowledge explicit and give the computer rules for working with and inferring more information with that knowledge.” While creating the next generation of machine-learning algorithms that would enable AI systems to have “common sense,” Amanda runs into the issue of avoiding stereotypes, especially when it comes to the concept of identity. “How do you convey common sense to a computer without generating stereotypes? You have to do this in a way that overcomes potential bias rather than confirming existing biases.” (A toy illustration of this kind of rule-based inference appears in the first sketch at the end of this article.)

In most cases, machine-learning output is only as good as its input. According to Lawrence Hall, Ph.D., distinguished university professor at the University of South Florida, it’s the human element that brings bias to datasets. To achieve unbiased output, human users must take extreme care when selecting training data and developing the algorithm (the second sketch at the end of this article shows one simple check). As the people classifying and labeling data are trained to better recognize their own unconscious biases, machines will be able to pull from cleaner, more unbiased datasets.

Lawrence’s colleague at UCF agrees. Gita Reese Sukthankar, Ph.D., professor and director of the Intelligent Agents Lab at UCF, forecasts an industry-wide shift toward more advanced machine learning that reduces the need for human input, thus reducing the influence of human biases.

The workforce will undoubtedly experience a shake-up as businesses continue to adopt and advance AI, but The Corridor’s researchers and industry leaders would agree the effects won’t all be negative. “My personal feeling is that it’s going to kill some jobs, but it’s going to create new jobs,” explained Lawrence. He predicts many of the new jobs created by AI will involve curating and fine-tuning data to maximize the accuracy of machine-learning systems.

As AI-enabled technology becomes more integrated into our daily routines, human input and guidance will still be critical, but perhaps won’t be needed forever. Rather than recording and analyzing data manually, for example, humans might someday program systems to do this work for them. The processes that enable us to ask smart personal assistants like Siri and Alexa about the weather, deposit a check and mark email as spam – processes that enable us to work smarter, not harder – are continuously learning, improving and advancing without signs of slowing down.

“We’re just going to have to wait and see what’s next for AI,” Amanda said. “It all depends on organizational forces and on people’s creativity – and how those two things interact never ceases to surprise me.” There is plenty of speculation as to what’s next for this burgeoning discipline, but one thing remains constant: The Corridor’s researchers and entrepreneurs will be at the forefront as the future unfolds.

Much of today’s AI runs on “supervised” machine-learning algorithms, but the next wave of advances is “unsupervised.” Whereas supervised machine learning relies on data labeled by humans, unsupervised machine learning needs no labeling assistance – essentially, it’s “smart” enough to analyze data without any human-supplied labels or guidelines. While advances in AI technology trend toward unsupervised machine learning, most researchers would agree consumers should temper their expectations, since this likely won’t become the norm for another five to 10 years.
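
Amanda’s description of ontologies – explicit knowledge plus rules the computer can use to infer more – can be made concrete with a short sketch. The Python below is a toy illustration only: the facts, relation names and single inference rule are invented for this article and are not drawn from her research.

    # A toy "ontology": explicit facts as (subject, relation, object) triples,
    # plus one rule the program applies to infer new facts.
    # All names here are hypothetical.
    facts = {
        ("Mia", "is_a", "software_engineer"),
        ("software_engineer", "works_with", "computers"),
    }

    def infer(facts):
        """Apply one rule until nothing new is added:
        if X is_a C and C works_with Y, then X works_with Y."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for x, rel, c in list(known):
                if rel != "is_a":
                    continue
                for c2, rel2, y in list(known):
                    if c2 == c and rel2 == "works_with":
                        new_fact = (x, "works_with", y)
                        if new_fact not in known:
                            known.add(new_fact)
                            changed = True
        return known

    for fact in sorted(infer(facts) - facts):
        print("inferred:", fact)  # ('Mia', 'works_with', 'computers')

Real ontology languages such as OWL express far richer rules than this, but the principle is the same: the machine is given not just facts, but ways to reason over them.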
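
Hall’s point that machine-learning output is only as good as its input often surfaces as skew in the training data itself. The second sketch simply audits a small labeled dataset for imbalance between groups before any model is trained; the records, group names and labels are made up for illustration.

    # Count outcomes per group in a hypothetical training set.
    # A large gap between groups is a warning sign that the labels or the
    # sampling may encode human bias a model would simply reproduce.
    from collections import Counter

    training_data = [
        {"group": "A", "label": "approved"},
        {"group": "A", "label": "approved"},
        {"group": "A", "label": "denied"},
        {"group": "B", "label": "approved"},
        {"group": "B", "label": "denied"},
        {"group": "B", "label": "denied"},
        {"group": "B", "label": "denied"},
    ]

    counts = Counter((row["group"], row["label"]) for row in training_data)
    for group in sorted({row["group"] for row in training_data}):
        approved = counts[(group, "approved")]
        total = approved + counts[(group, "denied")]
        print(f"group {group}: {approved}/{total} approved")
    # group A: 2/3 approved
    # group B: 1/4 approved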
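
The closing distinction between supervised and unsupervised machine learning can also be shown in a few lines. This sketch uses Python with scikit-learn (assumed to be installed) and a tiny invented dataset: the supervised model learns from human-provided labels, while the clustering step is given no labels at all.

    # Supervised vs. unsupervised learning on a tiny, invented dataset.
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],   # points near the origin
         [0.90, 1.00], [1.00, 0.80], [0.85, 0.95]]   # points near (1, 1)

    # Supervised: every example carries a human-assigned label, and the
    # model learns to predict those labels for new inputs.
    y = ["spam", "spam", "spam", "not_spam", "not_spam", "not_spam"]
    classifier = LogisticRegression().fit(X, y)
    print(classifier.predict([[0.12, 0.18], [0.95, 0.90]]))

    # Unsupervised: no labels are provided; the algorithm looks for
    # structure (here, two clusters) on its own.
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))

The cluster ids printed by the last line are not human labels – the algorithm only discovers that the data falls into two groups; deciding what those groups mean is still left to people.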