Why is my `chain.invoke({})` call returning the full model response instead of just `AIMessage(content='...')`?

I am using ChatVertexAI with ChatPromptTemplate to provide the model with a system message and a user message, both of which are stored in separate variables that return a string.

Prompt template
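For context, the prompt is built roughly like this (a minimal sketch; the actual message strings are placeholders, and `langchain_google_vertexai` is assumed as the provider package):

```python
from langchain_core.prompts import ChatPromptTemplate

# Both messages come from variables that return plain strings (placeholders here)
system_message = "You are a helpful assistant."
user_message = "Explain the concept of recursion."

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_message),
        ("human", user_message),
    ]
)
```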

The chain is defined with LCEL and then called with `invoke`:

The chain and invoke command
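The chain looks roughly like this (a sketch; the model name is an assumption, and `definition` matches the variable I reference below):

```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")  # model name is a placeholder

# LCEL: pipe the prompt into the model
chain = prompt | llm

# The prompt has no input variables, hence the empty dict
definition = chain.invoke({})
print(definition)
```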

However, the output I get includes details I would expect from `chain.generate()`, not `chain.invoke()`. It should not include all of the response metadata that is being printed here.

Screenshot of the printed output: https://preview.redd.it/vzocmpoy7vpc1.png?width=1103&format=png&auto=webp&s=3890c438dfef3a70cb5801c820380236a7a62435

Shouldn't the output contain only `AIMessage(content='...')` and nothing else? I know I could use `definition.content` in this case, but in practice I can't, because this output is going to be consumed by LangGraph to build a reflection agent, where I need the output exactly as it is.
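To illustrate what I mean (a sketch; `definition` is the result of the `invoke` call above):

```python
# This would give me just the text, but it strips the message object:
text = definition.content

# The LangGraph reflection agent needs the full AIMessage as-is,
# e.g. to append it to the graph's message state:
messages = [definition]
```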

I checked all of the documentation, and my prompt template as well as the call to the LLM are exactly as they should be, but in the examples shown there, the `chain.invoke` command does not print response metadata.