Love that this is the example of how LLMs working together as teams, known as multi-agent systems (MAS), hallucinate at an altogether nuttier level.
![Screenshot of article quote that states: “There are downsides. llms sometimes have a propensity for inventing wildly illogical solutions to their tasks and, in a multi-agent system, these hallucinations can cascade through the whole team. In the bomb-defusing exercise run by darpa, for example, at one stage an agent proposed looking for bombs that were already defused instead of finding active bombs and then defusing them.”](https://cdn.uploads.micro.blog/25423/2024/77e8ef90bb.jpg)