Why does my `chain.invoke({})` call return the full model response instead of just `AIMessage(content='...')`?
I am using `ChatVertexAI` with `ChatPromptTemplate` to provide the model with a system message and a user message, both of which are stored in separate variables as strings.
The chain is defined with LCEL and executed through `chain.invoke()`.
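Here is a minimal sketch of my setup (simplified for this question; the model name and message strings are placeholders for my real ones):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI

# Placeholders for the real strings returned elsewhere in my code
system_message = "You are a helpful assistant."
user_message = "Define the term 'reflection agent' in one paragraph."

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_message),
        ("human", user_message),
    ]
)

llm = ChatVertexAI(model_name="gemini-1.0-pro")  # illustrative model name

# LCEL: pipe the prompt into the model
chain = prompt | llm

# The template has no input variables, so invoke with an empty dict
definition = chain.invoke({})
print(definition)
```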
However, the output I get includes details I would only expect from `chain.generate()`, not `chain.invoke()`: the full response metadata is printed along with the message content.
Shouldn't the output contain only `AIMessage(content='...')` and nothing else? I know I could use `definition.content` in this case, but in practice I can't: this output is going to be consumed by LangGraph to build a reflection agent, and I need the message object exactly as it is.
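To illustrate the difference (the metadata fields below are illustrative, not copied from my actual output):

```python
# What I expected invoke() to return:
#   AIMessage(content='A reflection agent is ...')

# What I actually get back is the full message object, something like
# (field names illustrative):
#   AIMessage(
#       content='A reflection agent is ...',
#       response_metadata={'is_blocked': False, 'safety_ratings': [...], ...},
#       id='run-...',
#   )

# I know this strips the metadata, but LangGraph needs the AIMessage itself:
print(definition.content)  # -> just the text
```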
I have checked the documentation, and my prompt template as well as the call to the LLM match the examples exactly, yet in those examples `chain.invoke()` does not print any response metadata.