These modules of the Atlas reasoning engine kick in once a user submits a query and it clears the Einstein Trust Layer, which screens the query for abusive content, Mui explained. The engine's first step is then to determine whether the user input is a valid query or just chit-chat.
Salesforce defines chit-chat as anything outside the scope of what the agent can answer. Once the Chit-Chat Detector, itself an underlying LLM, finds that the user is engaging in chit-chat, it returns the query to the user with a stock corporate response, such as "I don't know about it," as configured by the enterprise deploying the autonomous AI agent.
If the query passes the Chit-Chat Detector, it enters what Salesforce calls the evaluation phase, where it passes through another LLM, dubbed the Query Evaluator, which determines whether the reasoning engine has enough information to process the query.
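The staged pipeline described above can be sketched as a simple routing function. This is a minimal, hypothetical illustration: the function names and the keyword-based checks are stand-ins for the Einstein Trust Layer and the LLM-backed Chit-Chat Detector and Query Evaluator, whose actual implementations Salesforce has not published.

```python
# Hypothetical sketch of an Atlas-style query pipeline.
# Each check below is a placeholder for an LLM call in the real system.

FALLBACK_RESPONSE = "I don't know about it."  # enterprise-configured stock reply

def passes_trust_layer(query: str) -> bool:
    """Stand-in for the Einstein Trust Layer's abusive-content screen."""
    blocked_terms = {"abuse", "slur"}  # placeholder blocklist
    return not any(term in query.lower() for term in blocked_terms)

def is_chit_chat(query: str) -> bool:
    """Stand-in for the Chit-Chat Detector: flags out-of-scope input."""
    small_talk = {"hello", "how are you", "what's up"}
    return query.strip().lower() in small_talk

def has_enough_info(query: str) -> bool:
    """Stand-in for the Query Evaluator: can the engine act on this?"""
    return len(query.split()) >= 4  # crude proxy for a well-formed request

def route_query(query: str) -> str:
    """Route a user query through the three stages in order."""
    if not passes_trust_layer(query):
        return "blocked"
    if is_chit_chat(query):
        return FALLBACK_RESPONSE
    if not has_enough_info(query):
        return "needs clarification"
    return "proceed to reasoning"

print(route_query("hello"))                            # -> I don't know about it.
print(route_query("Summarize my open support cases"))  # -> proceed to reasoning
```

The ordering matters: content screening happens before scope detection, which happens before evaluation, so a query is rejected at the earliest stage that applies.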