"Humans are central to any AI system because human knowledge is required to interpret the inputs and outputs," it says.
"Inputs for AI models are built through carefully analysing the external world, which can only be achieved with human involvement. If experts in a particular project's domain are not consulted, then valuable knowledge could be missed, and if too few people are consulted, then this could increase the risk of introducing bias."
Building a predictive maintenance algorithm will demand input on fault categories from reliability engineers, for example. "Maintenance work orders may also need to be inspected and interpreted to determine which breakdown events can be attributed to which maintenance activities," GMG says. "This human knowledge-gathering exercise is necessary for making sense of the complexities associated with all of the processes and systems before generating AI models for them.
"Once a model has been built, it will require humans to interact with the output and interpret the results."
GMG says researchers and practitioners in the AI field are making new advances and discoveries "at an incredible rate".
Its report, produced with input from a number of large miners and mining service groups, says AI techniques that allow machines to automate tasks are increasingly being used in mining to optimise processes, enhance decision-making, derive value from data, and improve safety. The advent of "elastic" cloud-based computing and storage, and the appropriation of graphics processing units (GPUs) for building and training complex models, have "further revolutionised [the] field".
But there "is still confusion about what AI is and how it can be applied to mining".
"As a result, mining operations still face many challenges with implementing AI, such as establishing a data infrastructure. Many mining stakeholders also have concerns about how AI will affect the workforce; they also worry about the risk of committing to a multi-year project and failing at it," GMG says.
"Industry and stakeholder priorities may not always align with implementing AI.
"Industries such as mining that have been around for a long time might be wary of adopting AI due to the importance placed on established manual processes and the upheaval implementing an AI-based innovation would have on these.
"In terms of stakeholder priorities, on-site stakeholders in supervisory positions may be reluctant to embrace an AI innovation project due to the risk of assigning others whose regular duties are outside of the project's domain, especially if they will be involved in the AI system's continued maintenance. In particular, if the team is mostly internal, then relying on a small team of AI experts could introduce the risk of resource overload and mismanagement."
Technical risks associated with integrating AI into long-established systems and processes required thorough preparation to eliminate model biases, sub-optimal models and misuse of AI protocols.
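One routine safeguard against shipping a sub-optimal model is to score candidates on data they have never seen. A minimal scikit-learn sketch, using synthetic data purely for illustration (nothing here is from the report):

```python
# Hedged sketch: cross-validation as one basic guard against deploying
# a sub-optimal model. The dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# Five-fold cross-validation: large gaps between fold scores, or between
# training and validation performance, flag problems before deployment.
scores = cross_val_score(model, X, y, cv=5)
print(f"fold accuracies: {scores.round(3)}, mean: {scores.mean():.3f}")
```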
Ethics were also "an increasing concern when it comes to implementing AI, especially concerning bias and privacy".
"AI applications are not as objective as many people think because algorithms can incorporate the biases of their developers. AI has already been found to exacerbate existing human biases in areas as diverse as hiring, retail, security and criminal justice," the GMG says.
"The potential for misidentifying employees is something to keep in mind when implementing AI based on data from sources such as closed-circuit television."